[Binary artifact — not recoverable as text. This is a POSIX ustar tar archive whose only regular file is gzip-compressed; the remainder of the original content is the raw compressed byte stream. Recoverable archive listing from the tar headers:

var/home/core/zuul-output/                     (directory, mode 0755, owner core:core)
var/home/core/zuul-output/logs/                (directory, mode 0755, owner core:core)
var/home/core/zuul-output/logs/kubelet.log.gz  (file, mode 0644, owner core:core — gzip-compressed kubelet log)]
K£ @Pz$kE( pHZGJn1ɭ qw'sAiN#'O$j N#SD}TՊ@Ѳ5Ikƙ"#5B{4aI0KkK %i1hL9m2k˶Z䵊Az8!D5D&86HYZ()z* j"|B0322"f@&xNE 0p4" t A0~R'=;ڽ}0*S6HBsDB@F]F@ҳVqQǵ_'~3XDR>IatNVpyHM6)4 BI2YM5pV."0WУcb,ZovnT1NGm8.tJ,,g ލ}~ҋIU7MC\ $r,!͍;Pn7bV.O?|$+C8^y\(aVz]]ܙ7/G* <#)DjmK (: w yO>%%C@Db4f6j͙Brc.P AYrP]1r^FvwRޮ9d.qvY}[+In?o|YVח2$ ``hІHmXXquJњ&Aǘ/^qP)[Dxdw+s@b|VŻP+mƿaR K7BH`[Y<*W3Vk9՞"̞JMn#xcQ'B ./Bek]هKN>U~uFL@;9+I>j'L0ϣR $ qj|x1޹9гj"Qn]r')\T}sښ3uz9Bwvn{wOvW}bW6 M!N}]!~Zi(fo1mJ5mo䩀w54_Կ! W][ZL5 jDZ| w++$V0'ONq7'ᐯ.a\}؞{%MJ8*bp)&nw9G|lMfi蕌4kҗ˝I;z)eqc( Df_ =h_[1 B2*Z-s}&`y(׫P|Ǣ(z: 1Z5 :t\kAA~6oi_Us=mr.|l`Z<| z}9-ɺpq.j|nl[u eԵ B z0CYuuGդRe"!mue%iTJb+/Ӣw32C:$i .CtR+eB\%c"tJTģP%F[Uk=wO5ۤ78vc}Y|l 3? N"·P@PRKxqzA->s܅L 9zf\m#GEȗ6߫ 82 = HƺزW=,_bّljJjYͮ*茏YڢC.=R U+Up{urM)|V@ hMeclD40:h5nhk>$ xtb3ƒ$HEWKכ0k斆*2~Y$~Ny>Eu|qE!5(zj\~mRhʕNpf)%Hj\1Z /PB *gn gb_m+-{3(u,E$eq+K :]LET6Q>&G Pt3} 'ɐ m(IZ$1{WdTڦD7@ b1쌜Th>K|8=\lqTVN#GC.*Uo:1=^}& gl7e!\OJ~XI16Ee`V0^`{0ml~Oy)ِm׻IFw"A+"\cJ{HLJnayΩv:(#~A\1%X%zdOoQ2 ]D'[}?lq.̐EFY`"RVe!:+R #),rӦ., 9KDҬ̅2:'ZZR˒H]Ȳ >ߣ86cGym5vK6[>sȓ5%ɅZ\RVO>AĐ9$< \NrA38\ob%RT` 4de@AJ``1;mqq_*-C jUX_ǰ'""@VP,JRH%kȯDSjR":֝6 سzD=4=FtX0% Y1VC4amV_.ZL*ԭ =B 0y3(MwIED]H )z35{)ȸB5jzl>ָFy/N,"N5{RwX_2C~Wpڠ KdEde>a{78c"Zuh9dLAiH ~pǕѯ 9Ȯ@ M %BഖdOUA>_t"|U@X»h: 7] 騍>UʣJbvRN+{;U#6_^I7}԰%=83'@[Z.luvGd'L#چ~^y^x}yq<20[Ac͸ 41jKAlE/̷XړO{wGRћzsk8T֑0=  = k͂BKϗ %=qM 'nW~fև4ehقϏ}#qTWS|5LS1.yͼ;eNSBP1Y <_}I=}Ƚ&ry;w̬2bjp]BTQXSEY%Q\@Cᒊ%NҎ sҴi"s%q`@0Afsq:!I:]&&KEY~ MN6ungS(21J@əZԇ%>og =:;:ͥs#X 9䢁/#@JL z#3%j~nj bM< 4BgW<}VOI ˰walM嚾.(~Q4"CRFYAȀ:YʂWAL,Tۙ6j7'],7,LD$-lI$+&Qvv*gAAi弐YU 3b,3sL^w,gH9ۿMY`Fd[-F#br0'0ƍ$l9g5 m)xGt)W#3aba0Na R1RٱE) bݲ*F!VgP.Nz?~*"J6 `ұp&cY Ƨ!=T@2RZP8:QO|#{!2~.cFD6ŀD2!%f+xc.{i."´լW_d_3y𙎅o~aYD Pt~L_<u7y)Jxt' 0HGgTKta\juF%Q!(ɇh! 
(;5XN—ټX E`PJrI&U°KZY0V+g EDR 2r"J02P :#ad:q|M]Fա8fU;y\%QP|[3lWMvˌһ,( PXDiLPJXlj6X NwIv7(3{֙XH_+J?XUxoL=~;oۛw4mmEjBxYT{A3[a}/nPMŢA5IsrCUlcA׭}a,8 F c.7|%EޏʨsF#\Ęr(:g_wS}$ ~wXW)婽/s2C cq?k-.ʻOʡb29:ߪ,^nzKZKzm䮀wkj-40ۭ[+Z<5k(lDz܅\ f/'w<^B3 V/;{{|2môk0;Ք>DԮa0>L DvdmqUgV|k}rwBས[%{oZ5wGl'TUzrO5:cK^ɖiL L|,oqC̜,~^Dl͜ 2I4T5 40{/>p3';֬ɔMpޢǓp}Dss<NPiXW"@'=z*ZN;foxnR7&I@.䅉%: ( Tw M.I!8*K튷2XІЧw^9 {T3x#Y]/6zcË#PkRx 5>RE߉5>d`LnO2L͞W{ۯ[+E h,mG MJYT1S0J@NPё|4^RgUZlnyhg< F] Nt}FƳr4y͘-%jR5r1X8QojOYeEža'}@}K-ZPFj$fƗNJQc+R(li.+ݎh,$_G"$=B*Cn].|\LG[}1b!D @T@!x"0lH4ua0@ɀٜ{#g=mkﱻm-g Ze>sJ}~87iӲd [H8Бem}&M`X\]Qs2@6Q1A ҨQgtCM  ! edc,M) @Z[`":d^55B>N!V*m CNX*HWLqXRdB3:#aHtW $zbWĵt~P}*Pj#C>h%.Q]̣a[qo _?{۶_!e=`ћvb h/8ږ\KN-3$%˲(e1ĉ9C<2M꠯<;ƍ?\%܊^z z!Gqm|}An֘F ]T)J! 謒k郯Z:QBU/dPtypǖQh3>ԛ<ۖDqlM+p:.՗_U!EfpCMGj\]5=~.ꔮ+dFEBWYZaX\.gȗ/F;]8mnn 먇ӁզfđRK(=u;Em Jj{5= ng `hB՞[@#y!Xj8 @Qe`i*%ǣPSvgU]{ג'{v<\uepZTT/ۡ^Kw59i*nݽe*׮Mh"sMz$̀6& RN!IJQ6R**kq28,M}2 c2(Bu{Wଧz7@Ǘ$3ƞ-}mŦ|`+D0#[Wg8|TSX4*S[jB:X_(tiCTHN'=HzV3&aqKd 0'8V׾x}4 Hs4h+9{tm4P21C7D/(18-(K$F)RP5Jg,+wYW^]x \J0.)瘓Z{"@Pň4(E"!hRM Ae*OEd2/i_ <ҍf\r2z ]vzϗ ~Y*=b[Wg:Ó*2{Rh8 XIQ(꤭4kVO(Rs o+wWQD'F:&&Nr&#h,:&%4DEwJ" QzVzxrg-'>r|=ͯ'll0<4< 0+.CqW:(s+fNcU)@"A.x^~C.x#@) uҠ0Z$B|L2:9h纠;zhJTSi-R< &(s:ZYG%:zYGR"Oτ02]qmAa vMHW x`#ʔŠXz @q85G~p#J-">J3WA &=Sy&R{Œb9\Ш»o #+T] Yi,6Zpל`ub@2ÙD-翅;ӑDrDgX$miȵ6`"93D(j #y~ '_}L'T OC+ D 9h>=p_&(7\($VJ38ÃHZɃ⌧/Du'^ABj`.;v)V=3{1эyo\ xN4]^Y#Ԍ R-=mgo֓u7ﴞ^]w|3y[ܷ?ynj|~ϛo8Q-lMMJvzEjN_:uӺm$7޽p|'h>QAegOt&:y9$r?"ARAk/ ZJZr}khE}PV"mHvV6! g@eӟCJ%"rryqE21`$VKQc`plSⶽ|MvI5.0Vv΃2˷fEC YmyIR(˒fů 7G_J8z(FYxƜ<,&8/_QFݸ{uO_OÖ euƳE"tf/&9fٮ`f5jMY91z?JQګO*P?ݙJu-Z~. 
Ig:Kϙ+%LثRۨR걺[@D]fc)g$HQK ()"4*Lr$ ZD"RK4ghJΔt%s') Βԅt䉗9eNCNx_iΆiČ9/Foܙ|n1;C>^]E$ ``hІHmX XquJJQAjXOo}#MϚ-9S֜boY:K޾y"QcM&A[7L(.Pnxn:kWʳϙ!.|SӁ$CTq1%ЛE^^ ond`gM ^:u>gZŬM筎8T-yۻfԮҩ%;{#nzKٙ>QH xQ{e_ƨ tG?"->ڹL,8..+%fOOG uO-tuĄ#ōc;|[0ҫ E:%RV~=́Hv2M7>˞[pai3vY'mUݱ?Bi3F,<1?^z{vǩ;~)lKj&&y|ܲAoP`07#9*e\I㙻0}{NҜ@tC:h٤w@yfZԸ?qv87r{ R)g)v5<( 1rJ"3Z,*Yi:)=Epg y\]Kp՞;Vm{^׵m,s7P{a$Oq>\ oj=I;S J|),eaKdHѐP]s߁ooۧ}ɎbƟ~g<1m$piPPH wϹIQ-Mђ(:V;gQb)"t{!E+|]>ެ%VC5r; 쑛dq+EBA"Ƚ T@BۍhU;(2wP%p/j;/0Tt&Y_49c`ZZ6LF3̉dk4)Y7Zop q?)X_.n6hOMD36Pb(>$8[xd` d~sW4qH 5E%vxA?k9ЍpϮdpfflQ@JQ 1*/u@3p5 |ϟ2L8_F.B,Owt ŅH5l5ϫhRjnwaIw^LC.tbʉ#ӍUW[o.ňV_7Wyl 0 9 s0 >;9hjR|73$HJB._m־̣$Qşp.f)>]O7z||0Yr}UFV:}ɶUUKnVRH^X"}tPR=Ũ>y_wW=T!N~`XapqdF~|ÛϿ>Qf>ן>%_p&rE6_7`J12saƊc?qȧsL EpSYgB"NoAqG,jb[lOnE(Q]-MFltoݠ&._eݿ/+QScB b3>ҮTP X='c>ω-Uq}MhpIr72hV8ϴ5D2=OX ,!9MTd>H蹒&Hކ)w)+c<SqU!hAw@ 4A3TFᓣIC 񃣺trWT;l}o٦/-t_ZWKO0>TdX382/PwjwG7T?_IUwxJGwZYQ+\qVjBv 48fK(ML.**\Jg*D: U^" *Nc(z;}n-p&϶廩CnXݼqTEf} oCMG3k{ \ԩFWt!^,Q7-H  ;R-if S[NJq&]iҹv.MTOB4>rO5~yG}Y Rx-he >ENJdPT7K)$A+tk}j}-}W8xKeL3RsSS g;Q8"Q RsdNq QJT$ex/_7( &I:cx10/ 2AeGjH!iMOQϜ{:;b+H7h'M's 5T651S:s$9g Zv, f-*$R3qwLg!!αQsɮ=9CV1HPGt%]OdWP>js2.l#S]tZi; pi&}6t7H0_YaJPaIjJ)i,f.,ГاsA^!WiA\,#qʦ8QB2eW+Kn`#h'vҽ[/TDD0h2u0#ZaCf3^k FEp~=FWCeL 㫉ph4TYTT.QPZ5x80r[y\Г}an[̇oV{X*L}ddX$P0C "oCثbh2KX#{c06}n;a;s}_&v$Mio_n $!aۍoG ؿbQ?,=#h bB$PD ,zs?y9_?#{^)uۃm,\uuX)W|%O1+L M1bdvv8FC.LDjxWSMN*,E% A{iN*mk䦭+s]uEYQq2\e\] |v =Gt#tdɐ{yttrqնd2/r3$FL YovI1mkA"ڐȿ$* A哏::}*jJdrAAy+{٨K\B"c-hclL&ĒU(i>-C lh2wn<K}D]X9Wj^/_wV#;Bw9nн"? ? r@E58y@800X1L4ֳ+FcYܻh,GXS4֖OhTa_&AES X2x $*Ł#erU@#ͦQɮєX\b5[#ڢتtF0%4\\yOq >KN{̕\}٨ak/d lbzb.biFMV6@9ʥ"A74CLC |I1" \)i%&g0U:sp7rq˻3} krM15*e+ҐwGL:F.L:F-¾iTZ?yxRg#2a6?/_]p<tO臐} e5/yE^JKe6ѥ(^}XDROEF$~np7qm7oFi$_Gܔ>?Xe*Ce)Yã72MkTuXX6Y![뒔dV\նrqFZjGXľqS/Ȱcr>bjO7^Nltfz7*%r-iLS]^?!C~Rmoui=H[az}2gG:ixV9PZrqyo?}Z-iuCoǼɞ {;ym ,7[s1EM>"Am4g1rm'&rm{ m)5~y~zZ.yV5-G.#ATYX6N18_hV ȵ@Ր#Jr=] eP5B,$NVL\}S"gw/}q~z>#7\`؊*:59@\-dQ BL^*8q%mhR@k@0)cE rj*c'`[Ob94OIȱ/N 5*&k2sԤs 24e)RdX=v+`S|iʃa!BW!EŎ%\**ZOjLfA! 
QGKVk%j3>@Ej1Y>X !}1F((B$Si#)gKylx첤:"]ڻ>{*ǁ;:D _,,Jcڣ@'a8ΟqSA[ D-j|&52fBު ꃨ5%6'X)_] EC:86PIƦ,9dDApȘt`LZ91yS󓝠WE  9},G<\ZiKJ5UtB.ӈ~Z&y*gοo:a~7Y^\mdu!:g^Lڝ 2~7[z 3ζ#-Ӷ\k ?BA=aw& Ҍ?9}㢉5=`leN aX^3ۍ-^a^g6PN"fsӄ*vl]AAdKr ٚXK3 OGlmdhyu_۱pѱR˶!o1w1 TL.gh3š άFH4:A5|ʔ.eę/3iq;I{3i/pUTK&#cM t(6g*.(třF2Pg)$\(D`I(XJvvƙyg>g|KZz9_nhȕ@JZ#~#TUzzܢHnI<>)>I7gK4Ȗ:.Wß9q0P`i' !*sB9iD?"7u}w#|௽;hBbxAP $WSMN*,:2x{Ŷ{}1c]D/)툴vC:Sڵ:ڗ.߹}AFQcAwX3 K1\u7jGxkm/ghd(~=:?+=:tp8ۍ˄]`~f0nuP?;կSl6M>\\O1s6~3Ai@V,qXMO+Ɂv+h\ٗ]U:fCk-HG[h8>(Ȣip0 ŢiMhѼ (K2v.-w)|+³~^:p6F(Zj`zx$_j͏~.ϯfM,3v9-+#ZDR?nO%؛<=\v(J-ځ~2hHL76Ӌ~y U#WܠE\ pUQ\Cq%djjs(QҗۋFr(ލ2Xp$.jjK0n/K'z? @$j亃WZ2.+K!୦:.27'iz`ڦѿۿ.d/̧w+x,!e7h)&ZK3AlPG;jQk6[ s (-^kSѶ_g70Jޮu`Nd̏M6o_NdxS^ d^V Hԅ\PTP] GUп '1NJhE8?V~KRfHzif_%pGy@X/mEq [ JMp͎Qmr\`' Z$Q'G#B4deGŘ | Z{]0x]Lj@"$MS ץ@藓ZV&[*nE]=*hu驾|VD+*1sR0idbU[w S>ȹ8wTsb{|Agv%$R.e%ɴzp浇*zT4N$̩8],F@6rBy+꠩R*T'fgِϦ|`F=?^QfQٻ,;1Ɖ%,ٻ֟F$XeuZnng4~G$062Ч/\6MWKm*_DFF@$ J\L8pl*t,Pn [W V8tL9(T2s 0p4" tA0tvRg'a؟|xwPDxP8)U.BHd T赋Hz " LY˒&eD#m:>Juy$=[hqEoK TH0:qG+8<$XHԀ$RRBN;n+]TCab$\oVb (T1NGm8xBsrY+>MRB*hYJ7\% H{c9P<;daߢΊ5_y#lR5M]˯w$rvY@BKse.& 4lŒ]uV:krju3/@}e2+ϙ%Lo`e$qB%gዟ|}y95g$HyAE"4mTɧd$5|BBC"ƜLfZs&)K#B1p.cXl:荔/,G٠8FQ[oWft3ltυ46ry_B""Kl %U@C6DjBpS}4 :"R!SB 똷xWqF2;F`"  β@lxk]q۰٣Kml'J<zu1 -"D-&Q)M m85>KKMR_5*,Ȳo:9Hٙ)Qw A;+5xoFηְXMY1#[s>g.̎1;|=|`Yw_:3S9:xD V{P;YnԓTji' ?am ymE= 0!I.N-6Nٿa8Ho \c65h9_05RЪ-53jBͽcamv3oZ[3ۆ;RN ԶL:6q;m{Q)LLtݮh.L;e҂Γ(N:\v66;^xB:ʛAQu;-8)S^R)g)v5<( 1rJ"3Z,*Yi:)uE3!(F<_x$b?a˪W=6^TAΨxVf:y?wZVn5s!4 8q"~=0%5h daPN9:ӊsÑkۣz"j~E @P9 VyB]eƛ[\~8móLC0lR~?F.B9\ύn{W#M浽ȡ 5Omဳ=dG\25sT0>QӺ޽'j[ͳl\uN|ln|w=9o/|XfK&P2'sӠv>]YnVvI?]g!><7_ BV4Mt$!tjziޥ,'dQ#p*f->ng ]ߍٿO=]=*#G=QW="܍ZqHX"m4zBhwu&gY8 s$?0:ȟ;׏xN)3P W`Ѫ)), 碸YZtw}ywv 䭈;iR?؋.nu/],_j|xZmb8IJ72hV]g"'jIR&*2U$`BJHړa&]L"z)!N\_BRBU+ RRO:d p)sXJ)%KKpU:K;+0S/TFPIy6IvHuTBYy0JH@9e]ЗB ] A$!=MTRF*:l~݇p|?d5¦h5vEgYeLcY9oObN!f6/5h^ a4 Ml{V!wwӌ(w?㕏tp)x'|M3 uE'e DZPW" ]bT$=n8;;u--mofLs@z=FWoX<1Vṅ+kz5P21C7D/(18-(K$F)RYL+vt-ߕo#27Kis%%sRkSŠh=61P"DBLB**,: "2KXGьKn&&b-5, 
gf>q֫nWr<799-'Hjˡ,rc&s=#ZBZJNIqaPJZr >qNH4t D"ѢQ䯘cJT)HxRG mG+f4o,~|*Ku,Sʍ<;1 >,%i6$^&+I{!9΅U2%YtͤL@PA%"ő@@tWw7.(CN(مB0n?>E Lv.[*2U-bS'J}k, ʠZ*Y ec<79˸!(00DF0 gV)5vFklf`p^L^%P%1Hee֗-.,mH!b"2PKdJu &UΌ uWv X% )5PJĒ6ʘpM̔"xVX FXV;X}0k3e=i;Kto}yQ+7YӶ +; j׽ygq J"-w/諁ڽtm?%8'o >su n.l=@K'p)W@%R5aجϢdA 3 d&nIȨ 4›J 7ZhfϩpW==$]l)flZqv7T뙣||TD1ZX\|ŗtm6#3sv,$RH,DMOL8sigL8 BIqf,l >>eN[jʕVq͎ȠwYb>2l`sC!C$8MQ2lM8sn ]>/^?~>]pUBWY* ]",tUʐUY* ]e,tuUBw6쑵 &I |꠶NŠ3'aEZy'aEJUO^I 9Xw{sP崍08UB:"%UħP[K%A%#maFYC:x0Qqꌞ/I{ϝw)Ld0"uB2)ƵGRVl3w"gNQ#QfZgBz%r&CFhj| ׁzi+$z$2ǂDؐ 9jG# q1fn :X7Hk朥F{.YShC:'Hd6>'̛݌7` ՑV ((PÛ`>'>C.\.@fa[ts4GC - QG\Ϝ% nlZWrC_ʲv"w*YGZ H2._M2$+Sd8*b,QGF"J[E\r/.2Ħ%1e䲈 ufj$ODb)z[\eؚ8{fO f<~z w77yAEZsHܧwl'͗>R,zP2&ɀKũʅ >S6:j72=xguikGUgrrh`8WA *Dm繷% x4MLk]_$qKq*ճeR&3њDeY\?huGC#As\C󻆩dwO h B)[2.kSxP"-45 2)EnwXNL6$A+B׍B ɴWC!֖8h7^e}5%@7 mm\1EOCdh;ُ37NF E3XݢNN4k!vT~B]9n*:#s:KRke+ٟd4Ydv`)y ѡ ۃ♦H=n`MO lDtxt)Bҙ51pj2& 14IjBKʳM?曇E]lK^'&a~n n0}Oz~dk)+m0/]l SK_:qMFw\$!e ז8- sy.V58 Z`0UYi$mLD2h 'A#C85*Ҁn2εltHטF6`6kDD! N A6qi3Llcm8+:(]% iؘ>p'Nvd-5$9+5Y4Ji道>/2n 68KIt&(rx+nw7_&9tyv7bg$x"ۦȷ7|qWܮRex+nysmy1ⶴ!;OR5uu{1prS N7[O :jC^լ+ν2J_`e`I|qbPGKE  VׯoOGJw-ius ZjeDH|"qI4m?|J7$L|tQv7kV?A]]"qwEXt0A;۽o{Ju љOwb/YB¿zW{32yo3h|}TVǸ&5-S`e^̊mfEq`#z޿}G6~5 !t_goM[/ޣee@+ZC\cw=˙ JDhe|ۋchSqX,*J:joQ'. >tt&5J}w9ɲ|Jᚠ% "_m !Gg5*H̸Du9.'(闺ruK>iƶ{Uvsu 8 r@%_b!'p)Wwr ]YA0c*gQ. 
4љ4@2j7TTQ$Udd JMp^@Mk7-4YT4 RNԷ$OH;wӚ{ZT>*<Deud36Cv 9;J)x$ZJ&DTqp&94g3p&TT hcA:[AB0: Zu`2-5J8fmdP:,1 60HVgPѡ!vEQEng&n9.ufVgu8_,աHƲ[QɄ5P jCusZ USEOF!D,$ fm4I#f JK*>g1w RsA΃;zV/y2X󆭴BA*Jm[jERb YDrl'!&ec9CFɄ8(ٓ;hL2h1Ɂ"uhkksf2{嶍dĵ8ൽqjT}Yj6)R!)@ $!*[0t u*'!VNz8Z;㽡0li;ZHXG"\JlB OA#¨UڥkFYJy`r-1Kȳ2(I[ Anj?_n{lJM> B 6U@iY Ov(ARԝFxqN[D4X њwρIhnt._f2dS")9IE$s2JaQ0pZI%pS_yvj 1R(ⷪBTrZR`>6`EW&!DR4yzs*[7=^QifɏمUy 1¯E˳aq ̉6T0[ eټ[v/BM#18G:i4S^_;LT0 0d<:_ zsx2_79<*A׏dӨMsHը@>L]lR6$4gr]^שPv637< 7^}}_?ǘWuK8?m&-(ܜ[~ޝ}{93B/{i+b%MkI@Vg~ pR 2Wu-X1Z WڮVw54*fNg]AquSn`.u'!|L1iV bEM Dvŷ{>q' -HWT ]%R+ĉF4ģ(B' >N]G%/>EΜ6妎E{0ixA5a"H3^1F wÂot3 ɷϦhB37;֖FH;ovK4k4ti]՟_K F?fORPM9#H d85>{tvtѝmXQPێLFt4 VIʅ FIGE5[sNZ鄒1&~V@i6(҄H\#$ udu\䌈N<{/YԼsgh U9XQReH4;rQKXE(W=vƹIuf{>oE7NIÕ9>z6b2JiŠ;PyAPI(a Ke%"MTR#*#+TT!H%Ygډ"*ޗ87hh[6qqL҃}<ЀĵHC4Ph`6xYKO/wJPLZ"f#Re4>XFeA;eS 4``n0'{S3SGI{YTa1`X3Ɓ!L{+gj \Ĩ5_e]#,k6p5MLW0`fXU,UtX8fDŽ>E$;-20"w]r:;^}ﲷ^{%doW.]X%R+TL\Dcr2U[KQ|n}~vqf5rFBB,[JL.0"4}j]R~mukbiSv&_]p.U6>rhwUF#LpE@TKE3*/cZ6!*YH.t^U3G{tA$u6?nVM/Tm g%Z/GK!1tP)wQ 3#)/ʀa43RSٗ9"CNYոQ>|E[Sk\yfWj >? ] BHޏ`K`Z:F,DDꥦxZH0<)κ޸0XSKY5],}Kmyzx'N SAEKm^t']8;_=VɲozK&eA rA2My!_g`j@w{+X Z\M]nw.{\Ù=cyyɳ9X=S1S\[ DASI^]Joj#ڵ<7 7#mۅK<]5Z|6q_¼r$.{Io;ML-n'D|6'ِlFg S5~e*QJZ;4nbۛNïיU[ɍ:tV7 AߎyYYI&jVtȜ_3BW]lL;ݚJ5e:魯m)$zų:AP5wny; t}|IVMQN;ID@S]] 좢H9|ejB PWϳd^6MG4}h00xicLQ-YXd&M/kxo 1L { iTv>qFS?~< UVEMfagМ 6.;* ~G4#?&[0UjW[6cn`-)nvZڍA~+7-X)nOM^UjTmFvvIZa+ j"  |0πiG&ʚ|0hFA? 
Sw2݈DZdaQD?GURbAr[6Wt:I&tfi s$+gTs* `T` Jf'/F!%s$~y>=a4jjz>?.<ָvūldve{D:*Rpj]w&y̡ίm;/;h+hUi 7kE.Lk|pjYLW0ѢZˈRAN<|>3ЅTmHMPԄDqёX;5F=mHp%t1rKo[ݳ=F랋 +*_Q8ILcЮL0}0j \`vZ~W H\#>qrUR^\=AqE9^]cPHt]yJ #= =XPz J̢|V^7F>: ZyMu~7?=W^\E.W:4XD.9Ly3idAV9‛c LZ+ ^r5` Fw\Mc#B"AuG^%ߟ/ %6U8:9%cQY]FlȘv|~G_^;tZ‚"۔M٧5{_r qSfpS |BYtkJuTr:/8HZyPv-e-@.Qx߷ݒG/[T܉;@d"Ջ" `Mb@ (.3Paa(dkݥ/XaGk?eL'i(1 p[xd`гշ 2 9|{9F4|]n5 s!jI{tƓUW"+9,R@D%nKVS ?~Z/P^<{˟Fp*G] M6դ5biѼg˿\ 5lHm.{dN?aՃ+P}4K3 uףOk=jbgjb0@Bɸ"6vfٵ h:x dN`Lm6q$!7tm}vaYB6tȧY#4ktrfxQ=!Yf:%BOy]_0Qg)NN?(/!;ћ_|o e͋~q3hn#AIp6L+(nף~{>ߍd_/+6AsQizj9BY=˯ RubP#لŏj-54qEcCƵ%y˸=DKu5nh:/+9:h[LɭKf.ѧ'u$N L2.3m F5V$)gwN*0:$}hRT |~Sa)z~‚'{z,$!4PŸM .*e-QCh$p4ׇE"::YOd'+{*cv ][5#jRdpI$'dSEHf_fL.j3=r8Qf?kc+n3B`Ls:LTOB4!rӣJy;Q޽hy)\ג^FST^褄MNEE0zTсјHRHYS{Z !* & pփ˘f ,4W,E~3>NT@$qꖌ6~D<"դ(}*bޠ2d.&*$ N$<˴$ \+B!ìbȭ.rљP{pvƢW`7h'M's 5Te"OEqHr\3@О)MX2Z[U,I#I@i0Ίs Ƞ l`Tݗe9k mDP8:}UP@dcόUa@I|# q_!rPd]4|^-z0Za1O۵}0"S:HBuDB" Pa. V#*$!YJJN 8~odm nKy&Eщ; >Z!yTڤ@"($d5jv:\DiYa0Pዮb,r_[n;7 OFYA1m(oRyM@GR,dDQ+c 뾠0:c ^vwߙO7wMI5C"g-ܸʍJS%%C@ (4$i̇0 ,QkΤ3%sx4Ȁ8KNPXaˮ9-;Y*BZ@-gU9˕pns#7yFiOInBXNM7_]~nG5vqJH %U@C6DjBp`M)Ek:c.6!) -zv(=MLl 7  0xwh6voƿьPm7T>֟9rϜf Bp/N@Q8b(¤޾=ݻPu./u(}wq4EkT%@o*K>|s^fyаMGpGm+WV1z@M$*[MwV xM|}[Iݝm%L~Y;vX|?ε:]Y{oƦGY_N~_!V& %nNnr ->M?hw6@ [ lV_&r4Kಱdn@[-:{``  y#D˟*-,}*v*GԝTjiy ߁Bڶ˘mqQ{X RORV|tpUiWnq@RIuܲmS]')? 
ߝN}/qS'kͦ>?ms']2|f5<1IDn?.7l~^-1g-"lj1Rw6]ܙwʚJk:Q:Z_igmns6P+VNBG~>y,SPS+BkyP@bDf YU('9uR8#ًg >(_%byDfSeՃn^mڒzS9^jղxo?*ۜo7i|ȶ]rEBBU:X_  QUB;"ri7Jh^ForAѭLb;fNw'@#wb%?<\pJ&fZ>罥BehG)RYU.Z,mxաF,p \JKu"Kʡ;(0q h=61P"DBLSQTBP$@&wIo 4͸: ;9L|Y5G WWź}igz_7g9o)՚Eo:)YZ> o]&kzjz[¶Z68+e8-$DžB)i 0Њ9"9ca@'&:{*cҡ01qP@0HѵcA4) !*T"TQH/w[)" qƽd<',4,*aqklƶ75:,&oTn2}7.!S5e" *f."hQ1o`#N)@R-%JlnxHN\d{8b#Jx%f6161aAX$),_b49}ŸR{  v[e`hh&E-1)j(hЛ\&B .x#@#-C*Ћ4t*bLPh8 -'I@+]9a/A`MRq_H KD:HA"VPdp-m)8Iiʣu0Q@QaBsQ"jeH{};(lh4"zTv; hH${D2'N$},qb:`u%PO*`A&H!c'UZu5r( = JQigxIkcTb2yP|bx!.,ɹSU&֗5N>uک=]wZY7D$.]6o{oH)x/wMUۺöqovknwMg=4.Q7bZ0v}Aӟ|OlzX{!Xs˦}աE;ד"7?o)5zwRM7t:@c~!m|h^s“ZIٓKWJ8c#ʴQ3NNO3s#/8?qJ~ZjWRœ`h(S#eyYv|B*wy?cf ^ȮK@5mBhSoy{6ڍNjZPV~]#:iw\磟Kͷ5Qyw&`wGmOV?0jO4J(F7rzv5_fD<u=#^L%ުVum^<}PPT9۬;+b6iĎnO뎶m ;O8֣@+{w=} )XJ V8CQ@>8k@u22&gb! $%hW\[/, !NOPE#[* #] :캜Mzwq͇#N]ӌq'jTa}X(a;|k!>r8aZEA?1rk< , !I9n00} Dam?.Mqs.AE`fGmiDQ+90 'GFG VTXA ՌMވ yXp@#dߊ8j)BYm|ɱF8$(7L:X&%!S`KAtIFr/Eq0FU_OàVAc.iY AFf 55BИcQgz9`kzubelYfmaV/,:&K-_RtݾPIEIPIv Dv2*댩 `\啐HМ!pC>{ \4MpS'+]ٻFv%W[GJJqY=w7'N ܆Mw'U~`e(huDӍKVR/SX&W@.әg~jL& f2t9![d[b8qD<0-q/([5(cQG|)C¢BR_l S]/iMK).aGR qXsgڂ4iGRI=# 1!4}6*r $ )rcS\#13 Ǩ#A#A{$U2"S lZ2zR}L.h3V "9cdm+Q=#r;޻x R'VL`7מj+=Lmb@&Gftcb TȼaB咧:M`CM\eY,ZI0Y:K>c## Z:T$;q/q6Wq'TjX\;k "_)'$YȳBzv#za: '4iiVqHz fH)}dbn -1Sʏֻϻ)pUEZ|=dc), uUS^)t̥ymrJ㭄L> e Hq4Pd~! `mzȭ>_^kt3>(\d5 f f..L?.5(Bm/-+Y /ۊ$Ɲ!-aLhV5AUjP7ԥrBՔW6YRWxj(su򀫟!r$e1BthLxFt\A|Xi-cZ` `Vw&iݕѽ~LSzrX-C-V C8/~+gl 7=8']+ѳrN9!:>qKd蚒' })c 3iwl3 (vș #B &8d`&KYǴޟQv1לa7޾t?==ZOKQ! 
fLOBZ\zŗmoϫ^pf訠)bceMtڄ*#Ǚp&ģ)dl,2Eg(UV!G cGbiL+δ 9A6T=OG ՙ{!Ut$8¾>0H%XZqfolq^~j ]-ge8_,(Hd'^Or]PjCUZ UTj)}L}T8(oH,#0kDT`sl~Ou)bK6>-R FƍgzV/E><,ŽBCA*>!+X s2\WE\8"R@HK:[~+U*U庪\Wr]UƃuU*U庪<~SRTjZRUjJW*5_|RUjJ=K;R !]>Ӹ8uhܰ Y!qdm{p(!s tc D0 QӤw&[:Ccs= _JeR6߯ ]_=}]gjB7usڹ@rO,Iɑ@M)z༷ qYd.s.ؖ*e|OmłXł-f KN _'_R)1s)cQ4D !i"cq=䝀+՞kن2J19xHi!D=[M(J#HJXeTKFQ ؘ:bܨ`a2[E@ϕ )ٔ$_ M&wlsH}'%foGqɹ<ݛv^WvE]1.(u)ybI&aZ{mR:Y9.UfL^aVڸQq DȐyA&CʠE4GQ#" T'MuK B1>؛~"-"Z^x 5)fmb|!餹 C#Hg8l;P}iuL;IA88f e64ls>QM$;P"Tjk7dw$a 4@6~8RDP) J1V3c= cr/AdQH12@6QgOZ B^~ ,)3?U{_pYll޾PkEn涚ms Zug$ `*ԺKfz}8G .ڬw#~l; Pv}S͝ mnRPNu7tӘW <;a٨Ԅ^|֞~y6t]=vܪo~ \p^tҤKZD+ ē^5RJ߀.fzֺ0md*[ˍ*UzJH^*s3O9}yH@H0iΐM0Q,o|4kPgex/{@]*'(gRl㕷MlG oRPK:yϐNkD^T!:eF WyLh&w(oH,#0kDT`s(Mu)b{Ts|I= ~|H&{?^}xbKiTT'CI)5CI1EZǞS4&I13 kYɋZjG==YaX.4&ec$@qA});.vԗ{KڵOgjG '<[w$X N)RR"1Q[ &ZPZ\N*=#/|ЩfP(cK $cH6wK&?ZYÅXezoM-k-]$?g ICCKkQܼ.*~55ʥo/;^JE9hBAANE))R(6XΎ /P9{H69$t%]0|Jgud(u $Q2 @0 sC3LǤ*3ޜYw½u> U|(Mn[ev8uen6Ϝ͌|$|a/Rr >$"nKdkHQ8 U4m3dbXҰ- <#a6K+im_k..f5[#-p|iD7b`JI%䋁ݧh*d\<22ʈfeG~mCu3_x\> ^Fe\JdD&Rţ߁;66h;o*)趵O޵dJP Rʒ$͗P]YIY=6ƸmpZu<0i ϯ]݌9LV=գNkԮgHQ{>?.u}Ub4>.F?vJWu*'~o: ۅ\;ѫo~;>}uzӓ`q~vͻH`bcޯOlF@ LSK_/ՠr z VoamDž ӯZb`rgB~>zG547*н>u댫|qZ:]Z(!mT3=]p_ߵ&:]=N:϶2>$o7m~A["-v6*pv2ݡȴ\X`%f`(;*I0(ѳșI+PR2¤dh m!M$i l5BRܞܹȹߝK:tKdeCg|T,(->XƐ1#9Hz%z"T A]ҫ,In ^8;$8V r4\,pC#'nӮ,V!#&>wA`ޫ FXA+hQ\1 A*!fd4Yz#gM9۽U;J Fۻ #Z)VhX+4qC< X h)w*53󿖈΄DM^ M9p|ٴH0ixp7r'8ia|g7˂ ",&` 0y8D@io,X-A*xGҒ8zQYO|#e.&#MFFjd^3J,uт)*: aGc2PgioEhKia]">ֳMϩ!UH/0gJh1uԠo`\X4@Nc,0џx̜ҍ.wE:zJY:zU2vAVGФmo!v$R4!!'DЙrnj-n'f"}U7?,cc-b$(B\*Ty* Oc"Yx{{%ݠE4u0ѱ>X%+J[{67=n/_ԑf2jbT3Cs<% 31Hߠg EӸ|cRlq%8]bvbR g<9ܯJS䫬'y7P?#z!Upl# be}0zO;WZD+CʀuZz}=TL<b>Ŭ Lۭ^4HʼnZyK{oҦyOB~7 VfNerr5`m(  ?W*LCw{E Z]Or)d.,Y0O0",rû≅b%Xw,5/6L,$SzQ2}{?0LFSd-KZna oޓNr<܊r62In8HԶgtmzRMTUw㹹^1]&L"ZFe)A@`ܬGg}鋜5sbgt5&6 +0:&rN;b`xj,DYTh'GM &hc&#s`a9^ ;Xgh'KT= ΂@AzHהzD%@4U,qXq 0(g5 ࠃʨz= |,k8cĭեqះaR!0=p̾Ut.&dTKV&|4a/'řzxBѮVÛev[!UÅf"XM>UdXA(dSD/G)(훪 z lM*aW8l#fr mNCa8Ic,>fK>u)\V#uWÃ,Nx:uiN{`SN$ų^8Ƨ7nTz2yZZ?%͝˫Y{h1[B8#5dzffٹőb8pG|= #]t 
CQsyV9a8`dGDW7c.UG%hӇYf*-9OK]_U")QU ߛvg?2:}}u9D%X`&%.ؘdb9P97SRx{ˡ5\ki FB4H8 ~AxTkX>-X1ܙޮFaTu[6Xg\#w⸼uV!|HLt{K'wVN;P2>K<(FۭF -HWT p]%R+ĉF4ģ(B' sR;l68R$#Z{1[ 1 l~RoQ?gA!}#o8S'6i90OL=_.ӕ|N㘽F{GIb40/!?%(RdKJ ,%%Oɧv䓹DilaH:iBJ0H9#RshD`c4x"o6Q1G EmR:2,bҒ2X+V"D4}7rw.Dq.Q#|`؂F0>P$ tJ[4Mic# }pOo |NĶ ]O_ث?_5ǵR{ʱN"L. f3uߍ? /foF.O8yBm)fTʛ9䢦L Q hZjv?ӳwP{?N\1)B2L ^"{$mznS4<poMkj#YO/7>$1A_%eR!);"}gHJy)k(3fMwWW5U_ >too fg~9`P4}gz3N:{F?0/z+Ld2{v~f`, v37BA>Uz_\}~.&f|5YR7ݘB Lb o ub6e O̕fFzAf/#L1$4 &JP Khu~eEF %k'/Vg=wqF8l+x_/bdC,,}eX¢K<+aBcV!Df^kUU:lN\;G?Eom6'#ZzRLUjl8;h4^~>sm FkT@cbƿvGWzDCj $W&qt};X:[mnݾu6kF99^x2,w)!EEf)9̉ީMZ+썳k;A /mJQ| &'hH.%وZ1abf: †kR"l%x+G@zcTZ:a6֮N%vqEP`TvV'O%'B?o[5,|S iON[gi[[?#^ӊ/%WUȘu=V1_|h4YJ!{۲p@ &O&@Q[ү <=lʒ 2=m$rDl &͵+RU2R-c5q[((x-TgU7)[tN0sLn'qzw4jr2-v& m>NJ hѐZ#!΍&c[Vbe> &C8%Ibj.L. pN爙eJjb0s.OjW'aVGkv ]f>C!ô(̶9J8Jit2YT1ì1XŁ@!CEGl 9Q 6jXVIޔUbj4vN5q֩78,T,bT+[D["0VrmE \\L@H!Z@6FFz!F~˶5״״aPE&)~jMNNUID^v#'QQ sjTL Mt!sETȔJ>c+&ObƐ ABF`d[!cB债~! m@׭f介>za{Wnqx󳛋y=9|*XzC;֜]άv{ws: u_zzHcVy_w&ـo}}w!6) dQ~A6%.!J8m~l yg{5[y-RNЧ6ЁܳD*.D,w^%! }!Ҭ. .yରǙB|8i\c,'Dk⑦30 ZdǴ?I-qvH 5]]{o:mC*WyF' iUJـcTJcD" y ErJ\<(Fsp3. QI9rY+mRm nѹ.7봊+cC6Xr0΁ْS}v*qS:>O֫{v)X$$s xgc0{U-:zp `e&0'2{;!cG~GеW۩ 胺:ԇK>)EOxcC;ɉ@4Td24n40}>602KYz N.br(֓uZW+Kg]^\P0nRkF Mp0# =5>oߦIn }wѿn黵pehְetŽH'Î*znݶ}cۇm8!-R (kFPrJcQW@]*# tl 2\*)"|_ۍW[z X#yIYR2M c(Kt:g d1tS`oww_Lz%:>ߖ5b̓fvޑ0(Ԃzi8y.H w_}.S 8KsvM\e18@5p 3vLA}5pܦHf6y6  [Tg>`-` d0%#3uߔBJtSyL6Α\:C* yjm"Cц,jsgZ }Jp/e[ !.y } 3m!oG11 !fkqi/'m6og*83yltRЀq̱&ymb ggǙp=!iOg*,dcm<섶Re!ЩCR̅#W{)gLp¹mC*-:yR%OB'"äe3g)'O?]߽yHbp8_*%]$rmuAo UQPT5CsZ ǫG«+a{J`=%XO S`=%XO S`=%XO S`G͵u7Ydց?8EYnf oo:~r7_7ٴ>;/)_Kk1'08~?T.SҋV2$IV~e0ou;bw뎾97?^d|\f }8H舃PzZϤ+ WCUd8!:keA4,g䥂!><$2 ̳A"yhZ )fp8E@ZhcFA f% ^/ʲ:%Hdo&3wȔހ&4 *5# -JN},%-%mTAyhkRA@w"[s=k-I$KRAZ< * L/ykdFS҉L떤 A&]dMKFfmHLI:s [&Fޖ>3W!Vgבcs=q=N;zr_~ m7=(5a3讹WP̗Zuu'[VeH(,85ÄRpsrUQU6xqA ޢS[y6 ѓBE-=)U&j'gY߁Լ.;`qWO rOUH+zHUg0cґ~&az;$fC2 q{݌Qt3OR׍9~Uu)A~h.ܬKnAa-O0|E#y3~Ն'ftWݯAVvB.%x EMVxNW{-1_wUޥw`ZVf q LBb6%!@1tY4v>-q[0{Xj$bjv]? 
|.#Y^,NFE8$bЗjGyTϐ)q(JJ5|9]Sl"#{MGЕ(hЛ\&BThU& H*!2AA2(u~9H:ՠ]۹o>\6粗{ z[㏧jDڲFF4bdDsx^@,A2$`T>Hm15HHvB5H A S)MyD?2e=FE5'H";Eaq]DNz]bt-Eb3r ʘ_(%Vf͋ܥmR{ĊuVYHKid Wo]sCfj6᪦Br,1Kr?\7‚~t?6 A1qR!B9,o"WZ{ ]ӫoAC% 0&ocΞ-^NRY߿jp BMƶCZ/5o i=4ƨkcP}{#r^!:2D!1QFe:%*EKsk)vXu\&-|g{8ן߇9'|\+nln AzXIw&؊m5%DKOL-;SLk~8U#_FM6N]HI xc}oXQat*((t0+gdQ(2Rup^@tW K+愐 Fe6ϝ1xqAGdQ>:N<18b}TJBD+HHM>Y@}2@ Xg\FDbHdsmFZ/|Q=bqΠ85%-OD=kSq:Tի"ES«h_ ->`vº:ruj^AN7CrrY|7x=Kzn0jR5!*=ʸC)fFXkYIl 'sgJA~4 t!iͅHѬž(%{}[ @쉴4Ph `&HD :Z!6NP!z4l7DgDaRk(c$rTb ;\B@&L-MђvYLPϥXeJzDRfprSTq2:E G e Q` Dd{8#pƓSIl^J2,)yxb|2FR=w*0%OAhô:iCɠ2hJ5Jo@q%@ pNXy*au.aulrwº 밿 gL󹨦4ʅ 8^-H9KDrM1 k/2l6h# X R3r 8Ss>P#pH""3[ ^Jw.rF9l\g gٟ:pz\|,ҒcxƏ]ӣٷFGђt㜍9.>8u I$'܆Aım׏uQQ>$ AW dr/g8<}p4Fx~}[NM5W usۢHrrzFnu녓*8|yy"UfkE֭YMq*5R^ۛoUzv}QBv4:p=d\2.+F-OƄT, z( W8xKeL3}7rҜ J9ktGKΈjųGKXa+ߛLeCl|!s1Q$ V i00/ 2AeGj(!iM'In2u^E F;i"\I)K id*cX(*[3Gq,@О)MX2Z[U,I#I@f,oYZ#gM9۽,RQĹb$U +Ĉ.quD| *tq} 2qqgFzζܺՋ.ހ'z =o5>!Њg 5"f@&xNCs 8DZA0w~R' f@)CLTP" !Q Pa. VTH@ YJJNB8ZQYO|%e`cKDEщ; >Z!y4ڤ@"($d5ZvI[{-Y=kE*|eSYC<$*oQ/P)cdQ(%Xycf}wA:Z㐧JlY:k ^.2qHY )mٿ H䴓iBBlB&BŶVvo-o#!%IeJ3W*J+IVOʝy?Yc |H ZR;<&C{%]EhhޓO$GIjZ "1 DY 3RLj!9S1G( b,9A]hGkX s.T\o;7\H`;|T+]XKT{V83[ꅣ^8HnPtW1Mv6وnwz<&6 H%E̟wb}쁚HT !tZz\u^uT}3Sj9Bwb<:nq˵&L;CK^C7Bψ\3:f'fk1C ]4kj2! -[%Z8bm&j9y"G\>\V% f'/'wp3sVʨf3ncݬhj'Vx4 u& tҡ:hMI(_ u;ޖ'-F_\v W2 :ż.&$FNIdF\%+rS'3ҹ.t́!7K/Z'/?bAa=6<;ՐzS9STJL9K#z>ߞKXI:.=o6a0I%?&_?o~S\Ј/&ߑ%%>ǒrRi%ns5=B=yD'u2k~nħW˘9K$V̞pP.џ9?yF.I? ·7( :?QF+ٌ\*q}f-S;͙ތJaoAE")˳JT# N_O}p\'|3m C߃,^[&Cְ^ٲ|3Mk3StD>qsQ7*OҿL'⢺_]F9 ?P!K {,LwtX 77ër/=ɯjE#u#썺R/*Sv^]e*+TW1L{2틺Z t+fU&3ɵj_R;vT]e*+TW) ]|DT퍺>9H]ejuuSWP])ά{DO*辨+r*S);t*Օ`pmz)?(N3$c?{s *uQ҆JN{Ps ;DsxbQ: 2d.I(O6`k@9n!,4v^t:2Ë"Tx TSX W5o/:);9^o9X,]_7etI7%oo!ש`7~T[~j)]BB5A4^\QID\t$pH&ON9g%|J"7HƼy8Eg x1 ^VnErMI-tq#OU+X?3ki&崾n=Dp% Lh'a,k39. JyQ11. 
,uꢵktByݪ2Re^v/7%p4Sͮ^zy@ !Kk9)ȸY" ׹dM2E5!/>.W.aU÷Cq|}zpPx`S$HM'Ğ 5 roì{wf&ö+2ٗp, ]{1*T]*ԽًR4ISi˲},[CX#iP4žF{aO#i=4ž#juRn{lCH`-IP(^v EJ+[7J e;ZcGkh5vƎXScGkhnZƎZ Ǝњa6lhEcGkh5/;ZcGkh5vƎPTcGkh5vƎ;ZcGkZk_&`J[N?o7;rAXNq:@h⢙J,[P1<4{4zDYsɲL3Ԛ#${gdhD@ƒ5FHNi>%/5 1eNU33!Jm \$91& E6H$M!99Te\M3zߛߊ$єB#?Rf|qyhy%auei\bY`k5M:&w)ta E2$ .r".V IB M\9x$bs4St>҃Y +?_-TQ@X*B ʖAt8rX(=t= {"BA7o~ƺy"HY[,*Č.VAeQhuE6PӴ] I"̡pLe4_l = %eH-MHߕ X$0B3$,lrR&&zFY[EnUDdh|6:aP%<'ֻ/O'qgKKǏ%/:jg=\'*o+R6yJ/W~ɊEJAͫ/};8D#P2z \"p=#'O4{0, %̮_x}==>0[!3~^ pχf#+[ƣ}u7|ˡ=u=hW.u#w3+1L6h'2eo=sx1nz8W6xu{VuRpa!cMU,M_Op0΃4N gFF ?g G˃8<#u9?/osa߿7f?L1DP`+Vs+Vw{=J? 'i.vpM)V( OH~RYIdMn>`X>?y7­ᵺ]KvS`~]G^ .w!V+YmL dhk)1ҙGrn:d%.>piR[Q$,x$I -ܫCb'ZlX8ߏia<*0 =xE$Hiŗ'c:=d:FM/zөsTIPtb${5!;;"B60k=,>NLH_yc"/&Cqr͉2)~Zi4 2WBEaA3tDF+ $nF q2m[r7}T`»-xm@ZY>PrHy N۩p]<^8I[A`"+EǤQdm"5dfvVM6a :m91r04xLbb =y%J!C!4@MO?;`*d`N@Hц$tQ]_l٠@OH#PԪI-Nzw{h\Do熦 R W.Iap@Vd9;%qTqǵ_K'>e>!9%E15ӚJC3;oVoEd+Yaɱ_5Te[<49`Vҋg\+kb.;7(m0(+ (l&hQ<`cju֡EpN -]˲W/%HHhqϔgz,O~iw;?Ls ^RsN+h6] GFaȹهZx}::('ۏ?x e{]ʁ28}7!h3VvʳD2}&\D@"x*HO g7z}Qjj>]抩ӳ=;ևz nj]~x ֑Kly{,%z,AMbq13D9ufw9]9.s J-:!(lFOFdT$ln+qW`W4_^{GLYM)rC\"g,\Տ,G,=ǤqJ9(TBd=dZ@z}%B nӳp;WW9C-914E[OUyp zCgՍ6, G<&5p; zu`CLAqR!8rYJ29q TKt0g i;UYQR_.!Sg @NR##p\[(h *&LQXMus' 97dEJ][B/" S`6%i,BMۣJޯ'g#!Mnkɩ>x}7dI'}tqG{PՍRvzzTl"鸲R%`xR2=WT>g,Wʁמ&\FD!HYydsMJNg&vϸ4VQb""Tb:@"f ')[#DqQL _I_H$etM!im!)< }}zΖv!>q9{H|;im&&[d@CR $q߉Mn|Bv!B7!Eʣł.CJRzJ*)Hr%(X,JS2~|F;8nO<e-0s|1HeT  w ƀ. W2ph=11 ൯t*V=4^EgœUŞ\^ТT΄=Ǚ~p&΄VXe,ԓl)+A9ж؜=NHb"$tYZ! +&*QPLGHdBis("#wDƾ) ppWNf՝q\.|d黫[z$|XfgȍZԜUPuܠ U#qOvgdS|'K)Iꄦ 8Vk]-!b9 !? WuNH˧c%}3QHa,ʊ'12EZ)tPc6`Srb4}s^x1zq1޻蕍v0v@Z/s'w \J!yKvyraYyyPxs"琭)dPN!KC%k%<ѡ#WTIeKIvٶ? E;;R[]ɾx&, r_ðHH@N8CT3t8 h\fY!}1!j9 R&B/rdJʡ$36C=x3޷0cM_D,MII}2R#VP(eǔ, 0hB{nA94lD QANtƠFwńPRY}1uْ<3Τϫ>_ur3Q6}jGLl<\>59hL`ΘbQX4pTӳfSsGkyNܒSݘȂ #t}v3M3WDٓjԬq@$#R up%NZ|'z.>L,Eh Ua'cE%3C<_v?川ptR7˹eD& ٧cs;1-۷wu{8mnol`Vhˁk'NkUGӬh~w3Dٕ ]OO_[:X2.;.~8p3+Ƒft9zS{AoZ2bcKһMͨ@*,4`ź_.>.&zrٻ] [UVꦱ:MjԆ-ϟ/ǹzBei89/w1M2{hvgWL`a?}woןx? ۟~w? 
L)#M]ԅo߁[&֚e:5,/Eq7Zi~t6eXԟl#MgT,\r3վ1&Uxs2ƏHv3euX5ێ,n;+^aM[6/|αQs2nKuk6=nQn]?6Vda Lgl+482@zR87O0nuyX{.1/[aue9I4xRޡSޣ0rWW JJЬ]eKKgB6;i23g%)k_'xo.2p%9 )$JyE&HSkiJ nOZw_ݹ_2tlF,/g6Y1`wb<3RU)>x f"i9!ˬm&ܫ|B ŒNHf\G87mՌkhѫ&G&rb*"e]T}g$H#gL frgQ!II9/T1*( .gU,<8}1lgwv3A|y_#y(;69ƈ1 =2>$ #Kܔ-4i!)0./L Ġ<eQmuTfQ)4~ZA<AOzRK;4$PJۅ3he !M#J3WXb T)!X- 8zM`oMXjdEGO *\oJH k-KO.n"}p4zBOz#2ф[eG^Hk#x7"x\i6=`͗W;zr;hgAi@ڋ; Q O dISoM}8k4ԂpMZI(+ko:! _RD)JAF4@5ǰd@_Z9U,"1bA*Ș.[wGv,Ǘzȡ*tuc_uK< 4m_LWˌ28FAX* 4/s4KQRHP¼sOCh:eb!}{tA$GimºaoWS\x~qMw=㢾Lw<9)\ 7vH\07?:]65ӕ\.>/è`yE RP*WW5;ɻj~f!Yw 8R|*L]ݽ& }M|. 5cAH-dJVMrBa^q\襤 'V!k0߾f^ W`x͜>{MM;`r,I@6Ƞ#Y$sCvKBNC\3zU^ϻA1(UӍ'TlV֠P` oMRuQ*]_z#s @VUrR |mMGNٱ2lx*0*rpOb@-TDk >+SE)$Y$ƭSQżs6c^vJc=X"g98=CJD,9 UZ_Nf;^yck@`9FGK v>Ѵ<U x6$#g%[*M~@!͔zɣ+-FFs%Eˌ!(9MD) g-@ۭb:$A8fJ_u/< S")2$<;H䮫ȍVUR>y^o&FCcP̖\6Ɨdig@Շ#L`N2Rjsl@slB 7wv̈́So`Zw8G F_2oz&:ίrh4NCgbTkP*E{zz)Zk g^J ߏg>yLdLF.21d !LҚ{$"灁[9&ɷ> ǃ6jQ!Vyn''|@5qQ@{ }'\w7r l;T +Zd_z w|/}VM˭cCGGBG)m..;G0 Q4A5y`$z$2/KC"b1[0NkcBK&0W UtxBs,3 J0+dOU!T ) ҺYVSl3p!k N aa2&~Ht }U[oi/Ӂ_t~H(EA#Ac,k賬sky]?7,7>aԽ/_8?ߟ-%~#t߮3= :?tv<9LI?J||矞'-Jb~F/tEL3I__ƃكw utс.\4swqkItyX~XQbӹ5m׻/C g~__LV?zQ4}z6/;яc^:l4tXkqXsg;566.%%Jsh 'uBi/m"'y,Hk<`Jkd2fpHcUv/i> jXpꇞݨ|$?L-{}̗ZcxhiK=V&.Ld0!rSV}&0ǝu&:;\[K?$8LVos1ŸDFt42IT;Ͻ8g8;dz*oKfZ;3۷[_[|>ïhh4)x}zL.\\84ns-7`U<A4\N/%۝?B:y-)Rr/}j?RϏ4_ϟJbJU? u ̕tE2,%Zk|i!%0:2%i4"1K.‰RĜ5qhm((f<wUly W%Nnq/drᇿsb~[M(J#HJXeTKFQ ؘ:bܨ`a2%[(38s{!E1 _dr,&:Y0X;8-Ns9]M;a5W.(fLC G)H*3CU{6.xT2@^tɐ!2hMlpTdd\ j췇S.q(~<"7,N$fKev2G5&apYPj*n3]EEi 8)SpɏBIs#@So8Cm:j췈 R<.vQ6MӍIo]`Rih [ݽXtŦ"oWK;Rz/V}Ç/Fa0k: K2wβP) JZaP Øa,qMG6"ÑAY쓖Bnaӎ,7%}Ea]ow^ct\f \?t=T~+N1ݥ+@hȏ8CbП ݤz\-q͘>ͯz.}z_|{q𣏕7ܭkx;e/:d;s9IŠ ܻLk\2oo/e=77%`O_#M:ъm `lfw+l]A?s{gAyJ\"JzU\*ƏYݶeyg7K~:0rC24OG_>芮+KZbߌFwMKWx778~lٻ,kMjm?~W)^ gcY<[=4 ۆP(4'IkO+թ}i:gR >V9t8eQN9ܳPv>$ul畷]doR#į:t KT!:eF p'mhNdli%E#slak-Iws ?o ,"f99?ot6=G3=g*稪C8_Hi<\>VqNV)rBt`}*KrqӍءr r8@f6:s;g6 PzPEA'P3G4Mp^LFVYRz:}{Iؑ2JtSL{qRL^z< b b{Qb^jv@uۇ*83xtTЁ̲.:mBDJ ggqLTL! 
dcDg(UV!G cGbiL9S ;a#}*1YREGB#CDQeGPK8si.Hwr6nqXJȭ/ڢ| րf7T `nQ 4z$*QKyzRr!@w5 `Fx" !XS pc{KinyW6ntӳ|?x)%=oYJ:BPt򟋸\UfEJZ+J "0ʓ1WE\O\Uf^RJRp5cUR_F?:Bsҗ+(ϦF󆜄0.s;ЕHfWF)Cr͞.*ZxBt`WSEZ=xZV-V6v=UYOZmzg g(>903`ImH@Rٲ.#BXi,Czs6emH.\.ȍk7.t^OGWH"g'Q~?M뷲0U:vgd=pń_G~ 'w34TmyY-/eZ^VjyY-/k1]eKjtj5FW*:Ҳv'Pĵ'KGCߥ+Rrv^. ]es]eltU6FW}z2a]eKWitU6FW*]e /o?-O)i?4HHQVZ`vB'YaGrѡH3)#rD:2%尣[O-D9eَrS[@n$0$S&=HE2EO W;DgL  e w)q!XdĘz)UȔ5\h9Zr8nw逢mmN*\>R–84a ч/'{xrj׍o;#kZtCe995 9AQz8`(6XΎ /P9{H9Z7tta|='qo袙nK"A1J`A21 fIKUf?=u> Uj EקϦqw&c㙳Ix)LVr >$$nvFkHBfeQ~3 M_jizDa5ő#ۿ}(wJ؇Y΍yI2ah 7U n0e\]BRUeL2)x3g9FRjHƱX8$*0)R[Em{FGPN'̈́3efpQhЙ1QX a@KONm5s7 ]20㆖<0kގϰ~[fKa}Cۗ?Ž@MaI>om$!^cU.K Eqы#*CA&k;ViL |(K6ܮ>rO' 6= D$@()ۗ9s!H&(# g>4әw0K d➟AeX,gG.lO}Cyp5gv^&l'VU,\ P]:tri BtM3.X5/R*蒁L(f'Mf=yEXk<іA<12X>Krȧ DP Aс27sU 1X<f?fB~NRTm._|HCH09UB$X{$Xd4cVsY_Ozv09 7C)',,O6MYXjUO0ITϢ(Q1gA& $@bTPv9bx"E 5Ξz6>{`y( fsĘyʼn ȈE& 6G.O*R>I =p?00| F2jr "!pLRLn jFd'Mvh7CD׀QJ˅3he !M H&UJV<GfY+x%e`LX&EGht.b% $FH k-$vIyDxn5̂f;~(_3yʷ:zX}A|u D0l)j"]ҲlLN=nX/nƢ$aki2^Gܤt&!'5d0LgF쳥ݻR6~}[jR$S󝡨U 7 t{OIQ3_=}bcDҨ %%JF,D2ZՑkLw/^R)D)J[GaF P(C5^@+ jlTTIa֪D#cnظtv3;Wxq3s]K~Ɍ?@[p^ԜϾzdm%oYf0rN{R0f mLNj,jR&ʻdoc'XMoL 8כU&0h2sg.{,^6 vo^ʾ ojpo*OwG[Zg0暪uO5UÚhڻcxqngVKS-HΪ%'DF~y[0[0O 5Vsy$x@xMQCz#u 3!³ Bښge$&HĜ\vUw$wNDMiLF7|f}3<jWۻYn ̈vN7 <(ӎ׈T :o4hrHPP"-"&imɃZTo˃0? z і-̵ֻ{f>y;] ǥmfˆ<<;?W'Jвn hwc/.*87[Z}_,;mh&~X~~ay};09k*JD2(&ES(V,RD |RP&؄'ǴP}Fν;ћI%974PF'@ّ`\.1HJEe&zzM@J`.Q%|k&̪EV yjĩtn G}`(sS@b>f[:5|g'opE6DItI٘! 
t;=eFw FT մzb)=yYX}!@ڲ)0Q/UBc> g6H 80'z`|qmqY.^_/OkE jekuS֭DEaDR)eW"\ۨ`)Y\oKJoyɲ/tӓ+Glt 5(rT@ںM:Щ䙰1clΩ)bkT/:%UMt&9BasOP d+#vFt>5r,L:FڷM=1ء‘0Pr("&b))ù((hlImmbĜI3Ci؊Κ)G( 2s(2rި_W`MaVXDQN8!6t$t:KsBc 9 s$bɻ$pR,SȽaۘ<j1ƙ){CM>gVOex MV)ƈl:{T:&~Չqq:<&_g3)y,.Ƹ'\pq{C $ @ 4 JL4/k$cTnűa3x,xa;s,2kNo7 >uE)!:6|/rsޚ}sJ7f(㋘6/cVݲã;n28] LQKNԼmle,[5"V5l_ {uel+,9($ &zR&icH9/!ZTʊL?Blpy!Jh!4?{׭$_דn)X`v7_˒0.<$ёdؒ#~CHvWW1sZkkFlAwfU\pnx `|x~ $-2(woL_~+txGm$Iクj!-mE)>Q,5͊M6ϻ\HI@Ul:g*Aǔ5If"i6'1@ 5:-pb!e }2Vۚ6*dْA=+jUI)FE5p!6q ; R=Wvb|i&I\T -.my%[5:kԣk5u6zX(oWX|8,O!X0da[KI=1?1>omu2dXu 5y8y|mZv.hX܅ڼz^ج۾zw@-O]l9}:Ϡ~Gw__e0!fIDy)eL/aÝ40ݖOϞ /OWN -_6b,xc bK[2eQuIZt.dhiAEm<e./f/2Kt7ޅ=߆msbvp/U~zfk_xųqC-lV]ޯwCˢݥP=i[V֥6Ғ#8g7_ڳgϷz;K0|;K!ק} PAPKf#4-!Iӳ [-O\-~#e@h|ҲRNcˎ3kTMZzR0Hb8W&6co[1VoR-\%]3VO !ŸeB婻Ť_)ӡjb)}vI;/q]?ٖNN%Li{%h:?>sL藶ZAMDtdƒ얦$]$QZ(|mI$J& &\>2fg yX|M-fD$bB1:&˃cg;w,řbF9 mr SzH(NM6ʆ/O7x u~Y|O*"$}>g:Hlsx !?Aqn4V~ksw=bLM{P&$jPdJtc7=*][]_kj#Sc]hS!2ƉrTrjS}*Tg[qF5wBKZ7ay.XC@F{I.us­91:MqSsKѢkb$@.G޽?EJ͎|biKd+,fL9"{tɌayhR|DP.rb^rJƇ:-|DE0sdYGDm hcsm c;pN6#D㒳9w  ok KO`W tk%8xK1~g=D^7&-I]ZmlCź"mYnj(`ɘx͙cgP> pHxsޜ`GVL˝փϙRRm)ރz99cz/FKѻ)E$q$Mqu~!)a@z1i mdܜ`hR'-Z 2v[!-CvLHёzSPc1s#ex)X2<JCu4,,xl;v4Τz.M(V ,b X v**6(:۠-!x. 5CZc&  c̆6 D&{'9αɥ`-)ҼZPB]"w|Q!ՠ໴\Q?A 6N[QP K_ 9 & c46Ks ` 7B`@ +W\+Q r Q(v_ fUU l*31P(q 92j$XBY{MyGD),d~GoaH#Y+QC0:I]BFgU]%ѷT=2/7ь˙d eR" ܜ`d,9"Qy`P]tB@hւz4iZg ݙ@E{֋qicƬsDq"h(pl!N"/J^LG a xb͎[K8*4Xyy`4M`R@e Y&"2:@vTm^ ,\]UBN5~T}gyQ7Ny' m0|Y+ 1H"uMry,ȋY}L! %z.x WY;8L]:| $$hL)ڽ,9;k q[bH !3dЎa:A-D +)f)|8b6 x|fW<%![tHe,I(ZW&B{5 M`IXb:mT!lnr5f0װa>y;8~\~*? cփ`^519RdTBgW݈ݹ.c|[~#AgWb`ewY\^>WB / 5G< g`x ~4F50w{Z㍻,V~2F8h7 >j0v A`pװȍ>*pn}7>TpLYU:[%&B{V:N@WͺoilbѲdtt\ n 7]anQσk/d P){SoE$eVl>솎'[JvJgۮ]<'%x20g<151Q,YJA(+M)Rr6s칓brwdžp*VZ2C+GrL HtTIhJ4e控 7G-4H_J5H,ML, *4i٨6Td!u Z$fB _Fv5kaSa,$K\H@3c OBl|Unjϒ?߄G\Xz:)-UlRN6`xI_í d9VCtϹU:\(IzсơvG{|H@>A&%%䂜[[܍?3'qlo-IQ (13K(&^<]ZN7|uw͕G]X5)okĆVcs>? 
[binary data not shown: gzip-compressed log file `var/home/core/zuul-output/logs/kubelet.log.gz` inside a tar archive — contents are not recoverable as text]
FU{MogCɿ[&,W6yVO^3u]o6<%UwSg}Ǘ}.xm`.tuk4sb)`k\=~7Ɠ֋ۡmi  V[5j1bj&*UR4h+Ts|:#;~$J/h)"pt@LaH^}F711sbr͞yپ>Иg ˭PeB(3D2`\K D4\j>f O)Wck.3w÷x]wִyYMW|.ݹb@4:I] \BJ;UV{i`HHNPm*Hčs 5Ǣ꾂ÜOǚN7okڠ.?&PEYh*[NGrkfJ_7mt=nm&mW~BZ;?ll۝->M_׀\D5Z^8l!)a +z_LtMw5of]1X'PC9X$%TtЄ5ɥ$^kDCLf}5 8*kPZ^+p~T[?e 52`QS4J3-`uP)cFDPd Nr`x)[iom:pGz+Rcy>U6؍t;޾y!պNct/\ 4n.4.e˪u2yx3.׊^9R{D&g-/BT0B<nrT?Ua+c-(N;Iֱ5hgIRD#S&&ø O@46&DK4y- T0IR,{}e<o0Pq{,/1iQ={YU6V-mLP ^hpUzπ$#U!vi =0<0H@kV( ^I$P0-s4QZIiP߿&0-BM|Z-4CM>M&Z8b?A G&Z(3O Br׫ ^1C"VVjJ,hH OH<d: Q ==|mc S)* q˔IV!8+zi*`f+U'>1>*8B(m#x#C G2Me F87vm:% ~2Po28i1U 2.e?N>\jC6yF1Ԓ#$ny"p~zEV^p3~uz.C1_=?8IآΏLZ%9;PX'8ud9N넥tE]`g.,MPO8?s$n˄akKO_ʈL!q?\FUG᪴LHS1ԋ?PbS 7|zdO0vAѰKYWLR02T3^F/2]@>t.*/06ijgL3FPwI|l?߻YիF߳03TnBez2ERacK?Uhޮ^%?n8.pgQ{*~ ފb5-DJٴ/{&ZՄ%eY7˚h³{nܼ01ĢKL)"C TC2IFI:k874H)bH#jY_\p<fF$›J!b^FHI@>Y4e2BK+8w;7zY})fqp!՚ W79սYI~V*>4Z[nI}a} ;OObis$qnu efQx1M&#ygwㄷY'LПFIԞG&*鴏QHHh6m T |)sQSH##iYm\D"F24C46Α}+4mQX|ך>s2^)&-5Ny{͖MbCfwͶlK|KgE($/ E,pg<Т!JsƓ 7NF,e&Rr68sxkXU޺q'ӥĂgJFɃLW޵q#20Xn[E!$`7 b@`E[iQݯ,k(c`nYͮ*X*h 2H%U9mx㶛 cP̸oiL|8ÌA}~a)(b(+C łi$ Bo6hMJ t&gga8]IHQWz! 0f)FRj1[8ğȃT-P{Aj+Hq , r: y47I`b)#tƠF1Gw;H׹sU{~i_h6giOoM 8E5=\L`ɘQXpT3aF?Mg-<9̑׊y9_#<9sѭ3́6KF()BNkAN O3VgtŽE|Uɘ/bDph}bGw`^*'7_񼝤W9~{\yƳ>OVs)"?u3].bFk{ߢ^.o^x%^5iAڥ@[M_N&0v4Oh=75oa~uٻŅW785/s=ӓ3po/g}qW|=K#ĺX;wtn~k*oXd3{38||z'g'M֏QnuDףNG2|}>I)˨*Cy Gs?SSlyvh嚇0kF!:r.$ @Av3Q&܇lCJ?{YjMv%;48JIrGںH|R2l5mK'5Ն]θHk HAWdX7rZbk8׊p!#eIY`:n|m?Ä`T Ar> bbMeH0)zZM$Xz,X Lr&PetXnh!SB$O리iDT)ɶY(Q{X>ɌLGPŨBT` \6f(gڹy 60j͚7?7ɢc8ό) F5.&a&"M/kQf"R~BQ Ġ|.į0eM[d5V()42 j5Zivv-`7UJk@,™le !Mc6TXb i8(amOdl=wʢK=$a|`B R$If#ZiKғv˽0m5̢=5c i,|c*;eg_ր FcX8gC/aC.H,2oţxX/"8aK"sqoJ @,&lZ7 6Ye4 HC};+ "R)]aUj;dEhk>"5$3Cƨv;;fЌ:^3A(~faQ^|ok:ኳBCT#%MѤl Hb@$ {a$,ſ% !;YdRE31]3rvWzK! Xr^z OYѻ[\EM,ZUuqܣgBGAY+ttI'rVD`ѫ?ySzF՟GټifLߑ#fM>`85^G'*6.Ф3cFLaDՉŸ@P!P?O A(BG>4ŗ(,70N9c2/k# jkƤC,XPtS0%20&U*Хu2fl'PLv#m|Enz~+LEpITh4,w ,\#[Y_ r cp;t(cldH9à9v/Ь)4?Sh)2da1*hL`(K t t.R9HgfeVFήҔ[ -4-:hE$! 
Zb3rvL1WCj{sa;~ǻfC^]٩RZ{gVu@'Kd5YJ b+H'%mvcd !m"P)B <IA F),T)j,.wO@'糲.au+_2<;fs7*7{,KmfbNbV+'}eyE>ޞ ^/_Sy&/_|W^Sq—w>zG's/' r O8僨ȓ7DDe7rЌ墧 ͔QFH^rV&퍥Gsşv_E򅺌&֋H׍|. ]4Ж[v:մy0܈^7,s_;?Zx9E-.d<+#`oFsU},Fvxqtu~9Yʸy8+7ĩ#ob3G< pg _HoȸΧw!e%e}/W( P+2$gG1uZプA!u0Iy`{krR 5VUjj={Hla2e4RcN1HcH"rH]M[EW7kRyhk |u5$mm°ȺUޣ/Y[$rN&2e!tj5UlHPB"[@V2Ed",$^Y`C{uTk+rv8vogMz`$Nulܷ5 x6į9>Ī<9덃%k(M  h "hX V"X !B9 h|W.ȋ䢥bC X_)C)&zN:,Zkfl׌J3]،3 (ƺQuኢlˊd^Ni{ttk|jAVƕCmw,B`&sĈEJoS^mrATc/f%4RsL3L*d_ض2Lp%Qvԇ"gqabEk7AhU{GRMC1VK H"zT=i#'u35'-B4N)Y1!Xi12ɳa29E cH$$ӛ"QM>aiFv}x$SUIEҊ?ec(q7q|^G%d.Ml d*+2d(܆FYaۘ^>lՇ>gʺGPa5Y3:ȝ:~|QvKvF1~`mF̱upķ姅}.-]/VݻnٯHb[hXT[zW_$cQwvm+TFpURE*9{uv+OK,HzOh-ā9e[g4/ݍ/+RFmS4L&q[y}[DVCpM)SQ.MfS0gCgX8 CLWST( Sp4gEЫv-Te$U;m@:䣂AA-MGpJm/'ڲ ]?kmeM>bKh|;I@EZN* J6NU_mSP$t'RnT%K Ye19lX)Jʋ 9/ qb Ecbk96%Un ˕ZqoNݝmAt_b^zfJ(1lutIl9ɵOD-+'M%L'Zd8}"ݞ>]ְ~[G./Eڼzv[໏@0&k+ev3f`r6]+y%nz,ּ6\-e9u̾t/yw*/ӚgOVkgz.d6{x9ޯcuUW7\||&DjH< Kĥy\"uI>rMZxf}[w!>|dҤWB;SY~ջc{)2=Pdچq]|qv^!-z{>> R>.8Ppaģ# ɏ{^x; ;p9 fss /&3dB& QIM!LLc@#Ї1VJL*A۔ԹTuVhVa>S`_rnB=}^Ám;{ -=iW8 ^\bΉ+U;o3t/~f5@8ujȁm.]gFgnD~iq* c][PzM5)Yq:9f*jSP>fN`ʶOw gp,"L \\!j4mLA|ߘ@"hU2c󴟹xu}?ϗۆvwȵ;5]S'WuЁ;1z gb,W|n.s%}Ĥޗ>ad[Cd4<j!-~{a[ܱ9H|Q jvmkyݘSLwv5bZ-z TyА)gt> tш`~0MvD4M-~xd"& N18p\m%Xӡq\s`\mj;nd W4jVFV擯CÿX0_Gp징iw%i&5N_7/?]_]mK,1=ܴR[eytmv-N2 E.@7nj_zn*y1bZԓW)InpWM&Mm턫#U%]Gjf^aϾp5C઩nbW)WM0]5zJWM%#ĕaSJk'w ZcUS#{5_ \5T{Skpj*TQzCSJW^&|/jji`q#Ďp%  6ެj;o&*Yӄ#|K/%\L`c>K7CD[_@J2kZtâM*h$lLi(΄CTk _W ͉=w+n'Q4[ VrYF4۩MN؆xffu-@[~:V ׮ۼ؀M0n(z E-QϧtSe d> $S\z?E-{T7vê'gl#:tvb;I{;xsJS9vzo(W+Ywi8۱*떗l^n`V:t`Z^0Բ;JӚ[DEWM^pZ7v\J̈́#ZPJnpzKz;D=qE&eUk\ZF 6+;"L:"\yv' ~&np%j]J`(pê7 =2,|ʢ]]3y+nN R|pN~_FѾz7~>/ŨZviPwu~U;oj:뮑%ۛO8?L6ru*BۆhKP"c͟B:sOv=:~vl?V14x.mE[^٩%gwkj~]k^޾#{mQp3;ei/*b;5p#@j91{'*^uqQ ߷)نL 2] ӉDvݜm]%hrեa"2$6]N#l{򝾛v!Igbfa}_.ns?.46w_-RQE.˥HJy=W>:d eM';zvzzC$pUтB Q6c)(fjUv};;p^6RH񗽛ir&[bKk>[+bI7CUlC%')s y!zZT;hfjm)P픋|)Yj cNR h)F 1xFkʫu=Ҷ.?~t!.:[՘F(e)jHQ<)S,ySBQz/mRL C.S2iRUkf1I`Lţ#DIk!(r3$._6aEAr^s[cVS7HVK򐨱㬦G2 Va@ Av%4t!fH57] 8W(c)(|"(>$X !a@YPn 4#ec̣P X[<^8(9$YDb$ۃB6rϨZQ8ZC~@ 
K/޿qJ<=KC^`zH֨{k#ԭx3$m Cŷ.TBNU~d}*MMU;*@VT%gB]V3!Hb 5 do7}2"dw!YDZ<3`+tm X1wP:Cr۠m8h"! 9y]%@_! qDP0`G7DePEx%qS{T=+ڱ ," ! P.2@K,brՌGpa#Vq ߈Q0 tdXD;*0~ R DŽ^qa/^onnqQ ?v#KKĻ5P;{*ߡ8}yvKQ6}{KCHϥԭR_; x[Iqn luI;ouaquۤ yͻ>@?lǼ<ًڄS?,A4g#o+\lꚟ}O||{g[7'Ch݁fSq6nzYm0HL? ޚbg/:}F?U/n Ѩf@ 6s (bH'rK'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N q- D%'?%;snL8<('@LڊH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 \'kئ@\;N ^s8X;ȝ@tEb'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qhN蓶-95t'،h(S'@^`I@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N8>Zo__W'.y)znP_ݹr}~77xemKfK[%7wtVKK0.}AyH fom\"]E bp hQlU薊A9pRqt(w lv}l/߭v꺿,m ]|%_\S-U[/ѯO>L9-<_xC?vx~.䓧4pm :z}Q7L.uˏf{Qkx,onsb^̇ݑJr237g[|u7~`Ma X⏿_FYkԗcs?l“zH>g?i{ٍ\UMcGYnpCFKdPz9 e>ڽTyЫtdURpMjbǝ|?q^L>rE3 ǭhGh0&NhT4ɻMa+<"s{gov6Kl7JܶYLgh_왲?*zoZonm^s>|Fwʭ9sN2Ƴç?ZD!ffɶBSiQj/4@6F+*4CW טV͝KBW +kL|Uҕ5VS; pnF]1/-]NW@i^$]qJos jtZZ"]AJ%6  4CW;dQ ]-m~r(bbi+@UZ ]kvA}+tVٹFjtsv+QW֚$ȰDbo_؛!p/M\]&OZ.t~m>A1w5^\W4'dpۙLejrGHl=Gv}:U$I)p#$:IOBiv֤ t>vRut:xfi2ބ^w㛭l 4Ͱ4 f1Ν)#,@6|ikpf4wbZjte)9g+>4њ٫+FIJjtEz+7CW V٫+F#CWo? >7O+5jQjYaX"]y nh& ]mzt(Z ]o pvnMatsPr*zH؛vZ ]1Z?{blt;sdRàg}Lz1A1iI!kUhIK$fnԳig@K>hOʶ5fm|1Pj%񿛊㺞ԩ *&[:j4HyODu4- ]}Vק`lgƝtU%&[2>4DlUl4z4(M/M!nZ+==rts4rFES7DWkos+Fi銢6DWLb ]1Z=]1J焮HW.:[+6%{仅'_b`N ]-|+֖+Ί(jtE)+K!`cB3tpm3 uat(IW1Y[ofR3th͝e K+):Zĝ@t2뤦d=Njgff'FI$w?#^~m7g*>߯=݉''v6 =d"t5I(َ"=v=+&6CW T+th];]1ʹM" ]= ]Y嬳 ѕKU}3tpmb;R ]-Hx3y'bzœ h=]1J#jt嬊N7DWT;{aZNWҕք]=~ipɷBWJjt'jdI5CW[@봛;]1ʙ'%tŰs _)sFܩNyh 4\ұf>͝erB i!`R3tT+th;]1J'rt6DWlThn;ԛi(IjtE.=ǙW+Lb.BW6̞rnw ]=]A\Ye+lۡ+)4P8硫`7o֗fk19Y=]:xVzsg*X}g,x*z2!irGTx5_^rH;Ϻyp`;Xj'|;wxW Џ{R)5$X7T[2\ZkX(LCIZ!bFʶBW$%UIٖq1Ԏ]kњobиHb6\O$>Y_O5! 
w6ܲ:o|7ow-mI#A :@q Z0QmL R,/߯zfġ(id36mf=U_UuWRPopv9-6 ͿŠG{.[1H+nKo7o]rQ 8DBX 5,pL`~"`Y`D)n_lSb5w^`SV^~ڼ,fF+69e\HepT[CHi#5Dq#W|g3녻 kC*("\gἸ'@;&Ո yTJpB&Sģ\0ЛV p&q.{~fv< h](RҦ$5z@h/U \KR!KIzjsՋ}K;J{yRlY-+:+;)j4ƽlP>0(^760EG),1x`3ei"'(F=hEF+0P-W!F!Hc" F9)ꝧS) Β300Bp;5[GzY'wd.NlLnnvcPˎL/0/x"+4.,=TV5'6Xİ Wx|1Gz!@G%cQQ*UK21\R\VeGy$$ όxO`Pm|Sk,opM–P11.Q{!)>kѡ}8{9E}GvAlL/YFIaDQX ]&Z@B8"T L^jJwxcMy?s%ڀflDKx;fOXNB)N`qPKN]'u\6:/egK7AoY/-yigĜ~߄IdSG!$vZ$x3=_A=%oyF]:ݧ0Ƀ{XwjoPP>ԍKrP(ߋbtmR.ap?ey-]9dr[&,ۘb6Ss%׋YL9ΚkowkjLHZ9hߦЕbSgxnfvҵ}+3=G Vn|#͕k!M#]x_R0R ڞD+mۙo"kƿv8]5a6;;rSJ4>yxV--f#83UMq1-󔇿cb*tOBzn E7ZMM;h^ot#] l`{6|b´,|9-b91Am{~1lSޖbM11UM"we(vX̲g&X8 t'ޔ3Tsޯ@ <( D;]%+rS'3,!+*X\?:C.u&זɞKb$տ'c q{Cy~cU'F;wY8V?½ο?x<5$$>x9EmIsxy0 8Cs2LAqM -tQ.NV3̴Gr3%;/e_:Z!|`{v*ZLMjht$vyiyyT^/_bp&%t1OoM"oWHS0 #F>j#Tt7+0jsF2V+|FeF^aiE#-=I;SJ|!,ea cq"EC ;LuxP G' !(kVHj .F%ƈCN5&. :A0AxgKP}q)ZEqtoyyc 9)C&gBcpy{I{aq{MI/W ;5Ί^ ` Dd{8#pƓSIl^J2,)y(xwb|CB& +^4ugU L+`ZKІit҆ 'A0hJ5jB7{% >m Lʺ۠=q6yt7‡o^$O`d0+9g\(4H!% l48JP(DO!S$YAGEWW01-p=0=Gj.ortZ~^r ֝߬%O) jax2 nG(l|`xIoUԌjˊ²Q^'nF_~x]'|vF yV4tE9't1Tx_4JH]qB y0JN;8<3?:g'0E{oQ@JQ1*/u@LĚxV,C0mCj- d箆EؾYҫqh4.· Vskdsi./WeIh6_/N|ԯZQurЧk#j[y'~n|_xvo:f[Qy R̙PNӤ|7nfn, ħvo!]= ٓ{:uwӼak7U6tV`l:ZNt<+g{edwAv5VE,YZAXjH؎l^`0K CzBf]D5is{1_޾x/go>{-e?ghq3Fᤋ3eb :) tŇgmYr̮+4]=iX=˯ (gJyY BKbP##ք>~T_]cˮt[1ا_;f=U,6ocyZHwPԅk=[Bv[u_^1}C+72hV8ϴ5D2=OX ,ux4Q"7cOذ֞ύC#W ƗE2 u2N/{:ٛL<0'rsH@2\MJGA,-n1˰,Cg?!FzOA7+f_+6:Τ+lP:XօJ™IL>қI&E灼|):hkI G$:)aAQ0rFs81E0[h22Jz0b \4׷;;wsC)QDh_.naR~J_ʎ3(k4[L`:68(^}&IZ 57=zG=Xԛ`&B(E`OPϛPAF2hl>(oiGq,=S$eXF 6J!D~z;{کǭ\@ֹb$U +]@ !D^C%c GS.I#=p3˿D΄V8tL9(42s :_>FDwC}tzsbrqҍe,2DJ %6zo5jt$=KZI8G/qXe}Ylz{å|ĝ֫㯾$ S ϙ+%LO`eNH3/&pL40nPw9;*r3`{ޙ ;m%q#"6ZveLZKtw oyO>%%W8VPZ+p^_.n|VG* %U@C6DjBpS}4 :|x;zC5vT^IwQMLt,o6tȿj08]zu0ÿ=7)Jdܰ<_d 7yy!&S9!Z:*1; ӫ@Ǩ1QTQQdQ#z :hJiGi+F1EDKeNk=q))\Hț6 R]) cpeLH{VԷc;w;F7Q x{v})'h>&霠F|&G b_=saV ЍgwmmW~NuxI_ffA;j;#)Ig:~@Ivd'DR\:TBU .OyAZ%5x <H8D똕8S\^)È)Czʟ,^u dcT uꌥxQY s̞Y߷C}) 6WTj9D eDomajC:#nceN)@\Y] 'ze9G( |P ~k&b61*pi5J#5((c!jKuDW*5d¾U\%[1ʗ 
Kt9aݚ*f$q0Ƹm"Qg &a .x~ru]}?}S`:N"{x,/,_ 1q6)R~\s_=p39f!Z9lg17T;Ŀr3{()HT<)_N??~T `BA@JLͮe56+{RX hKQEcV֩ ):aBgM#_{]hTٗ  Ll:pQ &S2Ʌ@uEzˑ:CJa-P MX@VtdH$, bl9|-,[SU^7=9awռpZ)9Y'j_1ŃjH\ztʇ7R R}ཋ*ЊYL ^SV XIJ&fRI\@}b_\I.,j#d 2 cBaMZ;1UqR=c7q{~X/'BJ/>Jm!냫rŗEl~OlvyvX}MBLȦMksXB^lBl*y_h\*(oZ. "N#{De"DUX \|Zn/q{l-9M;F7y j#X-Ǫ +QP޴JtR-|Q؜*չ9ju>L4dAXt%R)XpScTzt?\bGT~ug'8y}U ٚlᥙ W pq&UoJ:V9A N RW(W+ ܸ8U>hˡЋ!uYӀ4dCnQȫ&ac\cuvӒS'O"yFqJ&(U](!tD'x8M;N?<\cV؎_jmnoOO|߲F!3g}~>]M!0!EUAЮ(Y;_iyʛ_ެRE[GfG)YBkWxklTGdO靽;,d_]a2{қUK*_+>/FÇKo"ɵaA^;{X8.xO{r g>5͕|d{ĕܯBy<b.)}éǎmud8O[| /X1)&qѣWP~6+as7;~?B >uh;b :dlۍٚ1[Ӊ^D DHb1ђv)RNEEwNDFNU$%sLS6D3kUDtI ;r?ND9aqъ>n-; ;f#غ YtUφ[9f%;Pdc. WI4hnf~S#-ٔk4`V4@jd uhWm*ublgjGFW:GT:6w9 *V#A a3!%бckraK!'r6wsKTM+TQY(cV{@/qY=&6Ng!-/%j_`h >&F|Fl`q],_h3~23kI!N>dƜL)<đ0:#™8 Cb+"Ӻ[]A@'F8IEbS:B;ҷYwM&6/!颉1k,WMƺ! 'QQuM_? [ծ[ ;xo&P)Z@̄W/WuPgoNm^<Qe9=!ĪkmF?Uaol„`&ewjtQiO||wnn}W?}8CS}NNمv%>q6|/#<xWn Sݷ5o>+w7K9eEJ!|կ?ۑQd=>W`ٯ߀Y{|.s˚~fwrƱ1cl6<=N[2[ 퓔o i]iY;]L o еV\{rj= u@Pkc P׷u-]JEXXu ޣ@."Zj!k#[yZ}ַ i˻פ'9_;?k9i;lLJ'6mOPBP)jh3d y,qG񖉻^;w:({nj7z9v<ti̐ rYZS=SU8CSSyF2λp.|r+gc]y5q7Fbg?z,Eh2. 
fmt*v`g9!{+WkA 5k0d{v006FM`W (c7zlRnN_MG"we1ho]ᵸ+7vw%RBz 0+8qWM\g]5iCԖ L+k""ujU_ l"]5)c+tW>FׄzUiRcwWMJ'w *c5+@]j:s-IFjoR+tWQ54 ]ܵ&}I zjqy;kݪkY;ST| 5xǿ?{ƍnޮm hФXXkYr%٩~ϙcdK^fIQ'?s*gs8?%Aw2cu/E'n299% .5y/{t>aL'MWZ^W/7!|55m>IͮjVɛlSO%3Bl[ɇ]&ޯea#:}tf;_ G\uk4~;fh=]B2ĔK!Y^05X0M*Sыie*+L$]#'ًd4(>TRxG'ursK#w7414 pMiDUiQ 3iFUOW4GU"/?]ZNx QR3+N-,sBR4S/P؟i9*ڐ66.oLTm9ҕdB}_BV6$NlL k;PmTKW_3 QW5tKs+͉MRWXkBF5-^u+Dنڟ%]Υn V6nsBVٺh]$ʊԧY0jiL|Mΐ8 MN`F@13kD?U/;`~;r+Cˎ7v(y͆t%[ڵ)a2[%T$Tv2{ \׭."ۂrX0F7?cXHu7)<z %nt>9^m{Mt% />QͭKiߙqfPSs&ZiH>̑1/]KڼVYr;oŪ p㘆x{:/1fqc(DdXb#7*XQ"s)jg~+E7KaqmkW(ŻIqypﯬw" ?u!tCq _.56O߆B.Xf0٪/`iP~Ԑ-o*FU.[q٬Apx5{MY+/ܘ=9;\ş0٪9^7 /)T&AO p|trx ]{)<Ry/ f2{zcͩyP+YҸRov?BTj_/Q &G)feR̾YlN 8Y (~{ыWf&o|b2wOO)ѮՂ o&/)&qk *n$[\@*VPKx'-" T qUxq9OwTw J2B ,nE6%h)P.lF`ްꁛ)V+DaǷOZzg/:h ?ʟk`t~]iU= z3}3[z[H=me^O >Ro<R=4q?Ɠ]6K[< RZbI>X|"ij3ut4ЙHC4ggJ:mJ,w:$FA f[)ҴU'B 3rqptՏR}HꍧE+|&gy-4]Kx7E2NG PqL)bH A.;\Imbw6 I͚; D˿ea_-Hι3(B!uc5"۞r)^J'ujtwlSkE32z K8l*ٴF}C%lHzC'Uf}~$=Is+Znәr Jﭗ""S{s4f*Qpu,y6]1$^u0UmCeł>gZE ^aZxkC2hJ5D%-ܶ0(\agvzn6oRchq6'2tE&TD{ČdJD#e[?aJi#1%"w l;8: B-SL- ,z66u C`9kH)ǙF"f|fETpku]ZZuŤ\j:~Ie_?:v>~M}g`Z} u)T-=U=^08aDa@GO1nǷɅ˘LX`]!;rIvićɽZcFtWc4QB9akUֿ'@WطoHt*G'ЉHʤ jʔ'Gp[ϟOCPhCq0)ReЅg!^ƻ4Rc&S5dC ~R^|8 zHw՛<9>UFOuu˫",RXBLXc8*|d'HCa9~χ}TMZ~7(o'w} g?~{}甙󳿟[q%0ר ~-O(L*SqQH~79z2\fq-bAH[X[ido =ŞYXCImK9lNOUbU=_?pUގG5]w(L@lo"xw>ϛ),|@h ׭~[[tܺވ.yz_/H&/bunGx42ԎO%t%GZ=#QdtVWh!t8٪|1jRāUAHlaԞOs 8l d'vtAt@@YcC%eR:j|֗3.g:rdOL8C7gP;G.u~ϴƀU7}Nvf)L+]-oXd+=NS{ڎ,nX;h3 FN*s9-):79u3L P) F+v%UA j9hsOևX"B6+ Q}$%s3VMRDJ1`# R;ȍvco8k9ùfC䌄./LR]\p这4nڦUU;U&'ry1e5XcBPZ8WEsipthg/ PH&#D\r)J\}%S;(;qcTrK}pEUcM EFP:C )r6գ kVlg=붜l~Q_AssˎQ>1ˎ  >,F)+_m)+A G yQ*aN{P`Y k p-5blKmWmKZ i aZݠV>{It%l>òELDFu"8v΋BqȮ$ _H&S+*/aY/x"e"e&-F(YlIbNi+ﵯ EEDdm-͢ /z5&"wU>Ye*;#0/^i0S1؛ 9qH+KRƢ \U;|g1~uyH a3)$@ldGJ6@sWz$Tli@pp%!Y>˝ˍJGgadTͬbQdTL#5xA*e{VQy$bOau$2 ,{f+'p 1.xM(g <9 2a;G[6`5JYnۜtwO3:C Y_n% b/.#7E:~GGXVG n*+gE3SBa%wZ=Z_!z[NAowmOBBd,ں@V[2zOQ |^HPuܔnPyot%xrwsXUfTw)JXo!&)"a{.AFФi-^,BA,Xlk;9Ƅ&nCRCZ&@h%%v8Ќ,g"'W[+AnegmyM>WY:Ј` 
1%sܙYyǛ'Ҭ В_S7<7y1Em\ o]n[89=ۛ|tJh3viǷL&cx7زWk18EsƏ7hon8Nղ7roX"7w\,YW˼y\Rݜ1dJԛ- dfo%*A5bm-%%SƐtly$t -D*_'6lE Ԋ܅\8ѻ>.a>FI`Y&&wlugFP~qmՌ>w(/IK#UJ4fį~muUaq[ǧc(xD/k6Tq*P^kyw|qou?Sv?--w}@q{JJٟE/ƕeeeIbG=K9/"/)"Zy?ʵ4t3r)c{vdm}\~\4h'7~Q۫'eyӣ_֑k*f@^mx{݁HKm-.l?;FBW<̈́/'^ĥH% Cp & ):; ::~$.E7s.E;FfF\fRbR!qtT,*Zr>RʑP(JSlW87p- 'fU+UY70)dHg֪r),3)tFsޞuYoIOLѵ w&މ.zJlի3[TD-.ٷJ-b &*B#߅F~IS36jk0[ A;rbPM買E9j6LGB1)ESl/H0lMBтُO^\T)k1k %ZRaXճߪ@Zja(/ќt, PVQsUMFƙ=;-IvΡ Q_ʢc3ұӒ9CfB[`rz8FMApjw8}9uS;$vC0|Tȹ>R.aYe\€w\˳jDbP\U9qM!ZK&sֹN Ev m m[ۂl gcZ=O>1qIZ&ONdhN(q {TUe Qg[C9*QO HQ &ZW#2##fSYA-kd M4P>BA%yƤf43QD ihV)SRdeeYb O:ưO~Dc--~bfSm[ "$@YVٰ.U-cɏAK@FDt}1{!ýn4^^}a\ _9 ѷ䟁" (`Rpޡ/`Ya0a?,hUE}+yR1$wHK@5 *FV1arvX]AM)F+_g#ft 8#DAvOeAL>iH! KAMAz"lLIhUW~;I1Vhc!CñSEbqAG Yȿqd >La0vpQʅdUw Y%G $CA"xhHąo S!@.QD{g p ceƱYﵱl)VL1UCu58Q! 63)l]Gm9CV H7~M[Q?H(Mp>}|;;}FvMcb AZE5\2nH: LJTaF%)7m?hX * RTs+Kg 07,]Ta@ܣ=Q#9뚖֠6v:;xr|8eJ6>8E&ƾ:k*hiڊ×׿ӮVZ"5dq]ko7+ t%i 0AAY#s6Ԓb[Z\rQa0.U"/=KK.$$S6ͳZB>W Me)|-gR3n@ji%|2p5Nɷʚmafcɴd\8Yd< !\f.|3]\29a0vwúlߚ tؕ}֛ơLtYR1+ TbnP,Z{Wʖiwv1{IvFX#i)f&VTPlUaunS^O''ͬ=+ؑwC|%v2X-Y°>IhkuxKcJND&bT6̈́( A9 Luiw“S;/N0dPF3qfă*oS d \|f}%hڜIݩˊ;iA"Mʈ;F'xk[#LY^pcm `x~\b28MCWgzwS,Jʋvb^3/μx( kݛ 4ZoFѨl"|\]s:"ߘxŧ‡ECṀa|?0{?/!G9x2ͪǻͫՏCԏ}yKnKKB/STm %r8eϴn-skK g^%}K,~9x<4v4Zxm%dֱΤ*2Py.[bБ.WR=އݸ{pD=T煜S/\)/i & @-Ǎ> ׋]JyVNlp첾h74)7~-TY2'gba# _j^/ͲY\؝(܄ZJDmHw@g/~tkM\/M؜][O?s*s& \-:|db/}Wzk\?[^VTkEHo[=1=:\/ōxӮ]v,%Ens?}z˚(3=o=,ȃ+a$?D _vpW/W/7W=l}M8=ɇ[s!\PW}svq;_pӕ 3]=G *] `..?l:] k:+ƿ9"Gc7뙮!]yM>u%p:O 3]=C 7>"xXJ:JPz?s`SDWUVܕ5PZ9_nv}TMW.HmӰltS$Q*lKZ"``/MPr]'"o|%W=g% ][j-JZ <. 
=rнkXCNjJzŇo7f}Bd#6U  s3{?YF âM5';73SљΑvWؒCI",=jeQΜ !i.:YG!xԓ]33Оܿ>oMա/7gx䬶_wp)I\#3tF rLΚꃕM" Rnk5hrʤC5P3Χ)9UMݫʩ<_tklC:no쇅-Zɷ*BE:V,I]evj3'BQ$@<%)JoE 6FKe$X,\X)jkPRyiJM#$B"Sm Tvn kD34fw#bzy8* B%WπpGDk3E2ޭ<mc6J\dz3huoaSE4P&sGWaF}Cȥ1MΆ>g<3F%)*DmT/  eZ׸6׆s5J`14k;i^,HU.hC2+nʧhl`SƢFnJ}(R&tALڲZnB )NMÂw_Kͺ#oG0L#mTȗBi#)K>#(<4:WnYAkah?vl۸PTOJVTbq*M/ %XY: V 9mL6v/…Ŭ AཀྵN)(J892r(XI[vnk-=()HRgd0[QɷzK/eJFjeT":,5SMȲpD d@۽B6Q\ZU53& ; nvߊq2c)"9Yh>q U(6/X58!bE9È7&~,k9oW^/ UA\ F=Fww ڦHB[o /"T YeV2){H,Tq2H <9u& {jN$\`2*!Q-l5d8ƟA[y1ρ 7DRD]+t@ azuH Hy*EVpE`I%h B;ےX>]C:x _3XT3c̶#h<~ 3Ȥ td͌W2n8XdЙ$ fj"(W.1A2?AjDaoؙ (p&L*BȺ`3A78ˤB DNs*kqM`̐Nڲh&&)HhPYI"uw6Rv꭪.> 2ߤvAVek W| VK `J^ ڪ.q3{re}ln6Yi'\g,lYGwH7ם5B'eKah0JvhIfxjЬyךBԦs4RVO ]Ęv@yu"[O*3bO&Vd E~~@'؍*sRNaۂLv0 3 )ɔHπOt0Czb1VLw`E^1H"6b]|r3pGa|?$oQ^ѭ b^8PE#GG+c`\:%XW![EڨlJ%cT- ikX6#7+WjcM bӪ %_6Mޙ4(d:\Fb9df'SӒ-V#O;z&D#SI^jlj k D U>N:jUl0hlpi =+Bjx)AHG=>\2'IA i8 NR=Itk 7 ox@.` 1jłra4DX%dǬO%UFTcg9A%P^;nz>GE*89~j:ռug{8W!vsȈ~^y bY,AD"Jr~3Ç$E#ۀ-Ӭ鮪_d0z=0Qή30O=p\R7Gm{91peaV8eʅ/2%, O'u̖dvk1p_ :K52A/pZ%m핗~kYsivˆۿPؓ{sbQRGTh f(Eʳ6]>+ tp^]6\,7E$|%gSS^jI!Ly9 ?**Enmq/^0n 1W"xHj3beP@"3X83X83X83X83X83X83X83X83X83X83X83X83X83X83X83X83X83X83X83X83X83X83X+ sDxM`a, ÕՀ9+ Bp<-;IAy vbebKxX ({.o̴'c_WCXGq.>3AdN˂aI XE E&&v壽VލoGxI#t b;ͅ@CmnΈ5'ߙ'k .I(0xZF('=X;Dvb`>wH=deXXeXXeXXeXXeXXeXXeXXeXXeXXeXXeXXeXXeXXeXXeXXeXXeXXeXXeXXeXXeXXeXXeX k kz`aS^ , FKdɯ#˥묉&P-s#ZQTk7O_[ᇮ LYn/CmB!  -.5iێHƾbk*D? 
O8he>ேM?1< n2l1vt/vff|1 \ Nي5禃b 4L%xX.T& D\X0WZQm*e\Eg5X`V>Kӫ%>yN-Аz}OfbGeΰUTOМ#ֽ۶\V}z4|A󥷳 L]*^e3X-#<(;NAծ coFB9z_mKlʆx%\0'aZ[/~g  ,#8&%W$qo^n508wKE3 */cZo4ORb뭠)cۉ6!.QvkަHm=ۯ6,bYt NQtٶBmow @_Z8~%At+JAt+JAt+JAt+JAt+JAt+JAt+JAt+JAt+JAt+JAt+JAtT:xZ/~4^?G)t?\$?|Bew߫ {ܓ^San 0\"_Ma-; 0J.sa/0rq G2yaͨXy9jG `RG SbV0 rQ ͅ@>L6, 'W2;Nض~Koru5"H}.zףJ5p,OYN YA59GQy$RGZ?R@sEu\h5HDJx[x.a qXESk\ y[mJWa1q2ۀdr9?هx:p2Z^(ψ啼>AQ鑗 c7*(zSƎ!B1+T&g؃ IJFxxabx XTiGd39{gpK"ERmL")V*)P^ @TYoI2B4%nJ:'rUZl&j<->OMWx< ^whbJ ;PJ2&E-F񌧳 VՓ5ϗyBP<' le (TLLM&FjfΚ͓uㆉ%UP述xRGv-d!N.08*BKHs e*K}gR JX-2хᐌ$kn` 4LAvd/uē7E9vaqruL[!ե1HE-UVT$-U:c"2Y?]W'az5|P8 ʋMꜟ~HK!H&(ֽ 3:ktFU*9Θjd2 W3V^\#O:|+6RJ'[MXӇ{dK<16}(W|>([aQPgjGbo9kPdnEqb1K]֗0t=EzD~?O).q8Oh2%Ѹxz[edrƓIj^e5%s5Vo1uC(|ן?ЖbS1_ͪl;ZS|Ra@d fw\5#(%WDDS{뛇r f:eE}oZsy<FkIt^Ob\Ί- 8357?iS3))̻x!4j5JǩCQͻmM 0nmn>hP jZ=ދTGA6SyY*+ɵ ]Êēy0ˊEd#GnvܬS#!7]@ls 6) ^=k {)w?BDQK,JT�C'yCup(>,E u8˫BV5l` z"h7af2@JD@1Fɚ+a8ņ3an^@ϋ2̮'{jVxNm{||,tcvɅ)WlM~:+SX[Z|(ƓP'֡xRDHjjX}֢HQLcR(dEP;R9qĔ6x=#,zѥ/5vP T ~'4Ȁr4KRXB$ I9Z;㽡.K:0➙ Ά%oM(qJ>d R`ԅ,@*rMDaI(Oݜ~ckyzM$^:;aX^!8{z0u"v]a'I{*PpRH0ExShLݱ)pe Tqy6 t8 ).xapKV0owV*GghVeaq30CT2&R#UjkUz?W?*`'2zNE369/1DՋMa#LϖfTi5{uZ_oogV 5!Z 9Y3ƣ_[L00zb't iFnVY#q`Nǣ|el^ %zm{W4ZZp{KH][x2ƱF޸1CtQ<Ϛ`T?o޿ߏo?} u>v ̂MmL=߶l1*/KgƷӡ*^/i}EHq=? મ):({!~Zb|iy0!?e`Gu57*Mםu|~/Kuѯ Q4@ =nT/p_( ޚ~0wm'HyQ+܂{EX"Bhoi*] ^1 O%W-f3ʥC 8'YQtɞÂP$/֘{fIWvJk=trsPIFĎIFg]$7}Nv>mg1)-i;^:1rˤ0kwi;`fdkRtn L@Y(/Xr9o9EgpH&EgDZ|):[+9Vmǃ^X&`:fֱa*[b2KEaL  +fi0K"*́7xt' cpHGgtArZeṀIQ5YP*C"r %pE&eMڿAH+8?2R4<0 m*vSXPLڟgx ש'lı1b4-lwJ٥Q~u3_(T(xÿZb #A2*G˻s Nf]1[%ap6HFjRz\TknlǦ]g{ nϾu*\94_gzȕ_^d1,0s\^v6,2D|$9b[_bIMEcfSWuMsӇwY]# 8#GG>.Ly]9kșN!] QugID#$z$r  gDA=Ypu.e @D g)$H IIcbUžR%GdlbtuBEsB_AS՝5((QPÛ}N}jjt{.3]Rnн.?ԛYARYtATE|g'a-08XO.1ಱ܀\lg>ƌ}fT"@2I炖 sҢT&/,14mub*jL\H9MkP5ێTDlma=#f#w=ZnD]Z58mϙb̡׍|! 
]k Z|na^M~Zi&> 6jsvrٟkWJ/_&+U}0)&<~޶'̍|K*Nq=[@h闫@+Ӿj}XE:`;|u>e\1fKb}/aP0;W BI*༓Q'BD?XtPSr(Q)ԑRK,UD Pشhkuc9UCzmMZ\·Yc(Eֆq.Bd-Cc2 %K6Z~Ҳ)w#oJ2e,$NiAh(:'0lE#wlEyMA4㓧zƁzXίj;hX{|C2'q3On2i^lX8z4ʇ_@<ZEQ!쬌Ek`%`҈@BQ V3~-|Q< -0qJQȄ@p:HiXTYI֌9afgL˯\cuRI^OemlԐedUDl"XHL~b۰\jRBǨjP"dZ)Om'>%"fmEagtnF="]%Ba%_D Ų1F'6pR dl9Mqib ) Ύu ɪ0YFdzBfևQ_)W8Q6ֈrԈFI82F]Nvi0lT(FPF[/<&+F !RSȳ9`8UIMlGg"f+ΘIH* sX#~8#E!:qSnGh cq*J`5")7664TKoG sR^Q/C/E6㎧CX>tT6*lGu2G7k!GmQ}*Hp}`F?"v6䮀yuџWooQLhX|\ % ~y}$_i__kp]+9}:[7+W\3C.x;F& 8w~̓ 7NFzU(R(2D!FJ"#6*)%Y2pA;?=) 'DD $:u-ʔVXP0rFpK9^h6ȟ_V^OvՃ°dvyΛNͱXIb6T<GyS+] աɲYcdKnϢEO~agz^U8e{˳|.4O;Ib[#C7{uDn1<;⹞t%Gt߻=ǶpxxMU19[X\"Nx0DF4Um9<q[ x'H`v5jeН'nO1Tn eԀF~?>'Z1{( ќ4RʈTjuD@juL$E3 ka}?uZ󈴢բZ^ΦR`[=&,%6""rzDc,Vd@b)"۶ Ƥ *xW̙a @(}0`dKY Fr^Z4#U> CKJftV1f|h,%){B1CflP>mT.kX4¦,q.GQ9!3 1!2bH9(@âr3FHn4$K2^Bq.*BRJ0lyMRNɀšGs)9#;}Cfw ݁'rj%X/wh$2 cWuQ [RqPd9PnO S0p'MPeE! PjlOe x Av6p-4Cg7<刱{K4FFYr`!AQd&"]DQ XGB4OEoj Cջn@̈WOWY G{M/yZkǎYM^vW-2St|ֺzJ|kX_()7"د]F^ſnx__»oc!hD)ƢlxfFX{%o)zOV iJ Z&*D6wW-fLN{}k2vDZ'BZ{'ycl%͙6Nʹ$h 4dL.lC3 r1T|^Ah^Iq,u|O|'ðx7!uDžcGʳt0UV֘@Z].-1RE-RyPg$lBaꮼ4h&ͪcpK;un2חҚs:lڶ5g >&vy 񮝅jUT&1f7!::ɀNO?Ipv8;}KS|MӘ0J `c,XH-Ң99#S]QYuu &{7^Ru73߰']Lng"՘e^VqK&vDqr|c6j& !$b {;% oxUy5ۧ/y{87O1CMƓ/M<O& 0h؇(EL-=U>z2*{՟^68PuI>ܺ:t5V|{#/?E3{5XqÜ}?;Tޟy_`xg{ۻh:#>z37qX.EЃkuEe|7YBxyқ?cfˈ4U6Bƴe}Ľ+EW<#CW\ Ѫ֫+DiDGWgHWZFiFt9e$B¶]!]  ]!\r+@j;]!JHW))IFts̆)P2B]#]qjk#Ww!;'Un(px4ɻQ5p^+l_KHmxoSN(m5֌ۜbk^׃kKQ۹)|pf&W aFNذSUxg7Dc?Ԭ]?m?zJ$h->5Kko5mnE6}W$f%40V0уJw{_s3GէBY܄U34*FBӀV0vFBv4}4͸|! 
[ ]\AU.thh;]!J9FK]qn 4EWCWA P ;:Cg kStp5υm}s+ZDWwp ]!])IXV s+kL.thm}0(9 JKɈ26++H6}W>D5|H%XNte> U٨+D{l}8]!J۩s+˩ '_{m)a/U.xe&8Tl0 Q cVw(ۯ VKHO?`ۑE6ʽU}2Zf,݃i&+$tHT^z ,yAjI]-VZfWBےs_gUa], \֕10!ddH}G4PBuD,xD{G4Kv"IzI.SnTN !I#JѮ lOWk;ڧ)Q֦ U%ۣ/p cb=i.@DؐlhZ MZDiQ 4Ȝ ]!\r+@h QNU#]qD [ ]\FQWVT3+c:#BFgCW7`} Q.4pE64hUiQj4Z]`Y6tpυm+DYgĴ h7dCWWge+DUGWgHW MFtM> µ:' J9ҕdfEWX磮n>C YBi Q2ҕLs])Ήʧ 2.r+@˕i;]ʶЕP,#l J ]!ZeNW҈ΐ FDWr ]!\r+D~Bvtutڵ Zv,bXBsm橂}\0h̦˵o*u,7ݦ,9n\ӜRD47!Dk[_ (M""eճ "ҧu~vNnh͉vBiHFt:ڷ)JNc\Je_> v($h[ M\suMvC+YiQj4]!`a+Dڴe]!]^lÎ&\ ѲwJH˙ɈɧlA@+H3+O]U*+{:wBˈl;]!JF::CRVYSW;Ψ]YMɅmADEGWgHW!sRWX ]!Z`Qm^,BdCWW\ Ѷ+ښ ZؖNKݬ6nвA MT-bz*|n"8.\3*(/qj`[v V6S?^O/ Ho%.{u2DZ2?b5-_ntRtIM'eGn8ϯUk_>닾ǟ*Onzw͡ZCZɌ|5ZP+ju--gvan@O}2pTt97Ž cRqc( Dh,PGK +xk"&D(Vzcb,Oݟp'|zh > Fp(>K@*uuo9 М:<VOWn+{8Ƥ|F[=guS[>rłpw8g~V7?Gϣzac#W3ͮ]+;?65k)^UA{mv`Sپ+ûOں9~!nr Zr8j aQ?QAWAWh80w w>P??9yx~վCZ}xTvP Z9[xb g7wb7w >'5_*<gb Rvb>(>S:xʹyI ?ڷZKxPnP2ݧɠR.]U/&(\^~I{PªUr]ll1?Qg//(-E9gkϝT׆jH9h- J֤eNj QDL4qo tYR)J*+Ti^Nk5o@f^/6{R7Aîl狕mwVv[xNHij6%hzYUx6t. 
{ό+ʒTv:n{?T(潓Q%O@8+=𭯮S~u"TXs5&bkbNSNS)(`W(PIH #RPDpW݁*KIn]R[ T)J% 5ƄyL+Mx6ym;`bȺxWny#Q7ys_vO׋TeO˰;YK5]Pe.(S._5j%6y➅ RGƃF|5ո{+؊7P0Z|tDY۱)Y1.Tve6+{FtfYBE) Y,,NMp,NgIgL:iZ3M+t$Ґk R)e)C :d XUP,xQiKl ٌtD}2@Mq('d$Eˆufcpיs^n4O7 W+JaWN?:z"q`}i:׬MHUT]0u!b:ڤ^[U~A pm1=47|(HT t?;κ,ǝ,~fEv)Iŏ('SwlNnp `'Zy7m[X0wSz*n>)f|?*jŁ?+W27C7J)pZ0é4g,`2' +Hފ2QEʐ}JY Cs&!qcF<՞Jbtt[cpoK5/h:jv\虡 MAtOngS%TVRQqQc͹Ӎ-n4}b@m@cܹ- v];jlۄGkB9k'DL)^y&hJ5ܕ>E)xPLlŽ3Ir :~0EP75316Nv!̉B\ɣfaF 8X xOJΝ)閲° Je4dss95{N^ o@^X6ǚ' ^ccI'LG'MZyR,&hւq:aڣ<5ҲƑ;XojaJ?z]v W.Q=9l|~y2/*4垜̗VyS}_iΦI29߿_?}/7=88˩kvMذ\NUsR[/~F.Ru]x3?>!-௤2-CqZp T.&im`rgCm߈'[l3t^6bKZFcjEwbq)GA{Nklh+u6o/nQW?z0[GOG:[XCGV!E]$͊DqD G*#Su66Kr!Yq;{;(X0>V \B$.{e db 20kؔ,pFN/w:|v~#vz|7ag[[uqYb@.7;Ov);]x6mevET0̢'|c 3D"Q).:!J{;t `[ ds,,&'Mb=օ'%)@/'#s"jd3 @Y7"78s}Ma8We萔׋f˴Κ״gۣ#_d| TQV$4 1&dS̽GPl #ƹW#+.#M,PN( VӓRV4"x&*@eDh(e'MD + Ʉ%BPAv)b(x"zΖ|6mYDxn͢f F"e>3T`B%O4x!F2 u 4ȡ/%W@Z=cK0֤P7 y,w؞~F2Zf?Ng;&MۤY5y (-n'k䅜MPR7~|:؇CJFgnVqR(hiLzKq> [Fމ,ɏw"!It\ӚliH>nW聻[+;ݬM:fO_[g<$ӎ@+K(X A>i,2*FW(x<ӳCG%k<9)MRdV+0ZmuP>  E`GӋp=J{5"has}Q>+잢rGeA0y8`7[*Z*J7dA--ބ<-8T0ФMu`=>xeUy^5d0:. 
%{ѰjKQҖhqNiQLboF2x(tKQ7ĶԪBDn6ZG$Z礥Ԫį*?[V.{%LwucTnMCuou o(>Cn3ư[jVz [;.Ɓ>Naq|~'Ji1ys5Ts\Wufq8Ǵ#{زMs>ު٘ü'J{όVfV&駕4جo>lZ3JsTtlЪ}*@g܆cU|$X]T(\3Z_ vŴӯ嚢_'чeczϰSǚXcH U_2`(sV]A .X-5lJ|hB7xd멍gwH#.prDC .n)J5]{ҘZha=RYuyO5$1G2C `t#ʀ`jHאԃ,V!]JBſ71#c 1攥 S@g!1XRʊ k kI$y"1*\=F2rV&(8jIIwMa|64'iϛ68#VUIb ;RT@#ldNƺ1e!?QdMW5-bB!q9z-)IKA#5$ &YM$Ee)`8&~Bf-B*> g5ł.e$9WHLX/!TuK҇A,W-a4YO!Xb!t͒')P5 )TFd0l|H0ta09<6 ;ͩG _qe.UFs  HpOc\]z2YHq bc&  P*Y^ G3(33#MQj>"X-3X5D kłu2j&$r:ѴџiRp8ږO Rfh#!{KXYUf +,"l#LC)'DrQ = ̘B8HY @l |)[Qh+ϐjf CBL| uUr ^YYFë1@@%YXk13Fېl/4 g_/67mVBҗ +1] w=0B9gē]iMcjpVzilui04bl3&:c Tc]*窬k}ʺ^tok pEU!'/J!4)5^AM1_&a"a*d`7""9>=5%(=C X6-rHJᎸưJ`yW5 ~NzχsVr^㈎^0N?=:UY4p1G3{Ǖ$tЍ F.DՈ¿K"!.?$ A"VkkeO>)KSjFyDpPxksH[+6$!(*)Hc3tD+Hl۹{|b'@As/6#y!s!L5W,؏ zAAA4<5&*Dp;I8# Fߚ|vCaY}}ԘhLh2I rvҁTZC#MQq,*8$CZl:M;t̕ gdm{_uI9XD1d!WDBiR\lɕO"HV-X1wmX+]0q˸XI5ɤ&77M*}i^DۢĦ(a\d v=8 8CoS[<6 ['BE-=vzG<ރﵳ>~%ja \*H\Ԭ+ktmw*hġLΎ_"Kt N#ƶQ*hE"{1ԕTx[I1i#*U:q0QF:`]LW~;C E37*ld4 o3n!@FQ}"Uo<%mTh_ʧ5x*͵+C`rvz0[`, 21T&hLh TI _+V%iw5I&Aw^d}@⣴B?&΄*Ы472{ @C!:@"aUluv:J<0eŞx+mЕ b+Qq:]#'X<`rlS95|!=]ZǾ:ko],X,~|\XHB=ǛxK86_aj)!;J$UbF%CDvn#,snl\Y:W5qo3uR(y`c59_渏a,qP 6"EƼE'6Z] Uujm,قμ[Ћ?2?$oނj55 gӗi\ PdVN/ȑחW料J?Rxti2aY(J ^y9Gu#rr'?۴O3a2Koe,vwNң_$P`ИAၣU;`e?4&[4nmX';{?T{Q ɤA$d W*^K n gf\n-://S5$g44=v~5懽nۢV0`2~:[kdxyw"6`mN!(<5ߚ6cM[M)uH8dzGnNh.憘ĔoP~tď|r;o|=о[]}8]wi;-z{.oȍu<~,u\et3C*:]O9Ia<&egڎ3&9hi2 لE <cN<=,ӇE8 W$lpr5J:H%W'+%9Bc]\Ju\Jp=NWZoHlpr-ƺ"RwWR:E\ n]Z|HORdĕpyF"B+R j' z\ ]ڋWaoj jzQxK1gVCn<͇q>YlW01JiOFu&|I}֠qf%f%\1GuE1ixu-w l,h i+:\ѝWg2Z$QdU錂2Z+JNWgOi/]VNL2g6UFq֨*ZPA]~7,+>KW| Y) %xKN|]E`Tn=m_O=z亖U;tq 2v5`} h&#DzhP2: '%h>vz8gR965cȸj'm|;Hj:vڷ9F7#!HoX61~7(X,C_ER@ .__("i9Y~G# ;B7ݗ60 ,#J`4u.JZ'T)M*)}~*P~qEr5WV:H=NWh g3 `WָTndpKW$x."SvRW'+@qP XWVTjqQ<'|AkM.BT~qeUxNsW(X| \ R+;?wE*'qf)| |I]T_nvrRietTpa\G:V:B7"6 JyTrqe"'g Hf J.+R)z\"ތ|pr?aO=wTsW'+:Njc]@Ǐj/)]6dWavdf3,W5(CZU2zp}%rT8)㈣|0JGJ9vj͑:ST<£ѽGos6u|J`35n\;hZ[muFFiTU.&vӤLĴ.W(!U.\Zǻ+Tٹ => $(!sPvrWVuWRW'+irHl`Rlq*7=NW`*#\/uErU6c뭫SĕLꜜAsH%sW+m@vW$dc]Zg+T6equ:2i VXW$Wg3wEjmqE*7:\Ym7WN 
#'qY9lI#)^'Ǘ, bzM,2AJ-k^Bt^@Y,oߜ~<:}l|H Nd6)Ja_4ߜZk|*5SU:wKF EO|au8efXWiLXXW@3%ط?V5~ۢܛ"|kKy4hR~V-PӬ\pCݔ0ޔ c|#.lq\sfYZ$,Om/ayzkB}5enwnl6S\Q K+M%UY'Ż4/y6ArPT "iLccn` /R1ŤVKS||qtu഼v]>i\1R+\׍1R 7NsRp1M2˹ZOn{6e1kZղk9լ}fx|\mlgET/7z9I_HjL \GnVX@ U.ɬj'i \d㇑\)rHsRiwxfwz{Μ :^+vjBJ؂NWվU9gJ`Fuv|Jfi,$:Sk]1*k}zL?_PEYP;RJS,Ҽ(~FK+?˄ś޼yl۠?ͽKlb^"M*jY͟cP_|j:`,ISŷ/?Mp@a2_2vI5fnGl>bcY'4Z:Aש/H .5W>[.vx¡nV+齃u>޻}o։2[X7W5|ћ__QuJ?/,;TF NGll8Z ~q?G.s|2WBG%fti7ߒs7?E~*ݼīY?.Pz\lmG]:qmeD:8ВăE+H0S;}T9e#CӢ%PHUh'U֧*U*MT{S c-~kHovV[!"\E+H٤RY,i``BԨ&)T0nJk}k5x " cks-X(iRa5.u2NfrbjQOud:Z|ޭ-5+|BJMU6,`Rym&5!zIL"])'17:{`tFbh!SW'GK8F}o6wZcVB\`v==( oR!TR2玥D>C~BȥaUV{cv,ur>:x!?f]Fl<ߤ|J!C*R1^(s.lbqM_b|jqsIQUP܍S5ԺVDC*)IV:ѝ(sRBLxhVvƹ$6)hK 1įZ)DNX^D6G;TCJ0@Dƃ_xH,tXimMrzE5"zҼXt):Q} `]&|J'{X@䦤gE¥:juL:RȮB&Qw_KͲ#oG L3R/3~ F Uck\4@)5>#Hp"P DŽ2,4ҧMJ)4{P\!iZTUFB>L< !'qcGe&5%t_8gP4I"i (C Y0⸱Ԇqb%$ AcJA&ح.4H#38pG}Ƣ2YЩ֏DC}^jpۆjbPb$Sƪb~8?o%}wy4S*#h6FPע= Rb:EjC$*KtPK|̡cLFuu (( (۽XRr,a3m>%TD;뒄 X$u0]Zx _3H[vRhŊ޲N+}Ƣ Aa9mʀGص7Fu& EFiv%&PTT Q8TDYUQõ=ƢPaRB]$1dlDH!(ƆrĪM`k>Zl?vvH'YLg#NGhTf%ݢZMUр{9 j ߤn%BG iT_$5ZR0 z0dkRqc=A;dټ /b ͦU&ɒ8j fU:9 4IҢJ$v'k'QEŪcԶaZSQ5(U/ yHY=yp4vM@ƤE̯WFfĞT2j7% xKd]ж+9^PnDDh>?\zҠD+Y.T`=@ePˌ`*1#K[Ƞ rwc-iÖ T k7"($'Mm2H9M57ZHm' +U &ezBE5Ff,(*Fb1;z@U4hp?{DmDVc-S{1}.,8S'5֤Y 6)j`%HI22KCpR 9Ok3Dv*UaLM%ajlJ؝5f-*m@ fxXBVp*,]QZ4p䋞P+Bbx~)H Gk|Tp('=fMW:PX1\b%*kE!⸱ކ`ݤ>jEr7vid" J.ȎYxoBE!:CJ,=uRՒ-0 YZSs)QO_ =]w9ۣ^te_ݞͫzsp8FE-8~KdѦ#*y78Ι(N߼~yŪyp5MQi{ !=/|{t}Kgq J9UbA'nfHsa~Vr}~|XAo5Ng\J?痧''Rc445֯+{.]t ‡0mm={g0z6a$}&u E+7I S# v[.wwmSFHɷz'(`Z]-7롉fJIlp dyIb}:[e~<:9Gs׭4Н714D7;1яO9d#"hmo;HcWt=pw&9OİjZM^ΝIc쿧 ?L>=iHX$>}X 3P b}㿶:ߎ>ŸҙnMxtM@zzϝ8>ȉQFrb)Ow~bR?p;[m]yN|_*I$~؋bK–Qli(r,X9,n>(-w0}'3/tjdZ8`o `Z8yy-`%p Q= [8q 'n-pN‰[8q 'n-pN‰[8q 'n-pN‰[8q 'n-pN‰[8q 'n-pN‰[8q 'n-pr[8E:$% p@07Q+`%+^6Vd%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V*D҈Rū=%U`jVJZJ V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%U造>$% dg2%*<{%4@/Q @b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X @(;\ʨ^"\d08,FgWRjp[)o>,0{J8a9ӻV֫oM6epַ= EofNG߾^u:ۨ47@{@` 
OloΛΛ4}nlswENFe;nvZl6ca?YX@8:ݻWd?~3 ىeDZE_u쇶n{iVOgVֳ<G۸eէ [tcY6\, M__7.;rBr\kݿmwʎg^ @grߜ);]ÔS)‰8}5{Ҋc:FrH5mBѿ6DaI1;i\;a~׶~}~u.?Eg mB_G^A:m ׄQΗV $kde5_ tކwjgkY<b/VJӫb؝ò.ӲջiK6\jI;msrpc`ϳղ}jǞr9 еE/;/{ցe{0?m]O@Pyg^WGt׶^-47Iy2ͽ[rXI(p`6S}zgy3Xm8-V? }-.De`]@Ÿ "4h2n,ypX6_ W@뙀|F)Kg*>Ϩm\wݎF(}ŻhyT}8?KA B\X!ȮlRp*`x`Zi1g`ֲp;FU*} ּib-}&Uϧ ƹ ,WX-g?vGbc7}q»BR|X~g'E?٫2f eΩ(JS$biFp-Hj@U&nFRLtg9*s+`L|\i设x)hͺ| :OC59B`"N0UZ\LUUV r}sqsb5:KnttW@6aA|M_?m@nnOfuMpMk_ƲԖm# k8\u=о0?CYכ[s_V/ ޛ1Uܚ Qtl .i^&/6][t쵲2a(a^!("+&:vF6^pr>r ڿg`)@,.rD,믔+]kQԨad31 2!JI^ffRz[o7j.ͺiA*?Ȁr4Po8A \Z95Xk Öoᘤѷz Fo[Ά:\(&bK3B4wa:1{#WC,"ު!Ǻ4-:.7y9g10O<ȎAM >*Jh4)GADkc&=sdbrOǜp?Q+1kY7rUy j)TК+f`(rV0(ZK ^/-HmCa}J|^ڸi03HaJ>ϿU\^1&q- NUo? s{yҌ2J Owmr1_ Oӫ~ Nb-S{6iz0NN('n¿n᜸#OZ?QEp-l+'77Q+?x};,?8RY'sepY;[1F|7ajƑi<ҍTIXև8Oi)> g=Y9L-N7JQgliiua!EɾO/ 44&v»'P'97Rz#`⒧:d_HEꞟ}pU8 &=WvQ-X1ö<ؐ~`%p[Cxb K2d\iu\Hu>E r. &s9ҌaLߒkRPM)̔,cF95i l:פ5i˹|IRbiʄvF‡X. ^"*@@E)85VcwOA8% W>8 Tu=q.  D4ayTͼ ˠh$0rIKD$2 ΑC*挌-YkِώZ)v!Tx|,}81;n`D1K0xЀĵHC4Ph`6xXM;&;)7-KژDM^ ;\|d $@sZA-7vzR'\%aPD< w)J(wfvrGye헊z̈1XXk(QP0 גThGx {1h0*>n 7,÷E@-V E 2WϝrB>}W"  qsXPe^tƃ>{+(#aE:myKvLCr52`H6yr0ߴ0xb]JYo;@XZ)Buhd&g4'W{3 !C~C9W*V S}Vq4DhAkB%aR:;.47Z8+𹫂*tLPYmg1tx=[oZmx%UJ~zF}n셱6J+aB; ŬtSa< $^$B ÷#T@vDI13wV8; El≸(-k@{Z1$IN5@ ywYt设\v@ݔӬU~fTjDC8@ p ŨH;-# (h\$В!+<wQtkk2vkW0A-&J[ƽFTL:-ҵqr0ab|R,iMycZsk44RXM˥HDPN2Q1xè~Jt -"$#yR{A뷔%?F0,z.yϔz,Uq53Õ u{|cM]JmKh|syەnpgeyVQm=` JFc&3eb߈rjzo!QM-֛x4[ٺ_/#[n,Qc? nRqФ%Ѹ>O׹+d5W>n/DyˎSbZ*=={jE` QGfv<]o4V[5 4\gCʗMv;:iNik6frS(պj^C>o3ܕV˖ <%70Y`q'ohv?1ocXd&yIm5r'N&&цՠlk("\zԺS]U JP^&{N$Z e9V*TκъzrL8+տsVxFa&O> dN"#Nh@)Qs% pf7zD.~̎*_kerj>6+r ^VqFyvE)#?J2$XKW>&\GN, (ȠL(3f ,6:CB..KfG,lَuUA+-Ձr`]\ⰔbBpT!#R.]_GT19SZcy6KYCK]9?{WƑd _%)]gacvaeԣC%"'ju-UUETD_6E|e-Ytcb*˗K,+7LH 焧:M`;:;[^ZH %Cgg sr0B#$Qe &v g}vrV]2oJ4$O?&-Jl9t^ECҜ좕L3I>Z;k'\ɫe;nWцm(p'b#g]or²4!w_[qM-6,^^wڣO;țwg/j9:ht<]`?:\x F@eD&KO=lHOjZ/.ʸ(\pqƣD#l@05eM0. 
ap<⡪{~xΑ[O3($IpC$k ,#g?~ ?ediqJLaPiϴ̷wi`(\1yzkrR C~n0 '!a4!aK<*)0搜l Be2gFtО6,a?/,/[RZhz\ѤSj fmlr6 tn M}Y&;ŻLKDϢD*;M^ss?GЮ=Iq}FYux,]5Iz` ҩٌ\,܇A[AUpe~i)߯D>R_x{7j?bʎZՀ!jIB îBx/U gּ(Y/1ų{,߸}joR/w뒖GNӗ ׇ]k kS(ZJpŭOciyt{Wl߼[GONi7=;gڹF 噯YJ-G{^:S \|ˉeW:]06\r-JzS#od6Qpmf=RMH&ZƣYMW;h$&AlJdYx<@6pq}$q* SIٻvtGD8㿳1ڽBg"`j>Ypd)$Y$ƭSQżKĜmtYI0`]XiC,ǔgZʥxZ١rVJ/&gK?pk8$g4KM.a\\V%=G28%1atMQZB06$#3ϒ[RPfŔ,JLhx*阄Ƞ-3c5] 'KRJe aAr9Xif#-֤Bl[U"tkJ%cnKEtH)>p.pn)K`M̧Ȱf;Heqު/)V1ٗqf>X $ x ec09T-->~ i`c=i 4e`~ak:ܷ*k}Vm G J-ӗ*cT6Bid.:-m7ji[ i,o }*..pteeӓ״G72ӣ8MM'/ ]o—tF<}IҾ[sJJ>.Z4$YDT쬝ojɫ| qK m('ׂ g]ovy/4!wf4|#ZlX6\׫nS{i#4Nj1E-GG]u:f~9ϖ+Ǔџ^4꺝]N]Kgi9z+ڋic׋s)tpylzx_{7,VmцDoъ5Qk06FJpV7LJRbfp 5ۭW6:}>][\ë/*_g-ϻ#|[(?}{񵭹ov͖UH߮ br!\5{o[`\horܝlqCɥhL v69OKXlZ,Ӡ* Py?wX P9ޔ fmPeQ1Mu}Sb=yU疞r$e1BthLxFl:g ˈ`KRqZu]htsҍc}LmY#fon-uj ?PGz_ϺDR{>TIWq!g DJrbt`}*K㎔{KW=JWѽHWltK2pf0UTt9stX4Mp^̐\%s}J\QWs \|z\^̇\&9ׄ*&(#QL~Ji&$`ɤqz*i&EZ}O3)R^bLM܂yBpU=V"-}"\=CdO \S"->HivʔNO H`UYސWEZ=*RPsgʜ.gK־̍D/*y9/AuV#?&th77SfH ]&FK'bjwه 9/B FUj:?pS!jyt5]VJ|zqXaj׶R^&$,6(hp*YNwC3T]RP_htEZd<k4M}hHJphGc4w\2!LSL&ᇓQ NOY _GoI!v1_d17/7ef"c]'_bű^ݶZQnç%]ms/ƿLcH ߚ4OڎشCz['"8X`}Rkχ0{VffTfyR*Vf-Rk=^~uxb)7F?y^50`Q0خ1wv~*ݼ⇳ilxWo-gh㧍J qF(]q6(]5\PJLQTV ( QZQxN|HeU֪UW4:J# s>pUK;*s}*e 6pRp^ɳ+%`Wpz,-]U)•3+*s6pUŵgî슥trx |Uם{vUzpM (UW \UiWUʵ\J|*s*-Rz;k+ZL|%Jo4k(0t]7$8`3kH9"Yfʊ %5Mh Hq*`)\>yVB4a:yDt0))? Dɇ6#-}h0 Y]|M^~^,XKy N>_7sۦŴED7W/AxɪN{+>d^Ԓ#]FN=>-McoFkcUVԺ"RtҔ'M&!kq[Rׂ&HxHFE /S7a/}K\7i}9`*u7 ?~? %-)ݯ>M>"%t!ыeuLϩHU'XkV7{Pj^Op_=7_XԍͷPÇK?Lÿ?JygG21-ʡ׋/F/a[y7)ԟї?\UtG%_z+*O1nKz]04> )&tĂ4i~#Og~zZc.^OYM5׽.Ja59fhƨB 9]QtHxH tޅNj<[C@SO 8F'3H{؜c'lnt^^ǔ|CkeVw8u0v_s=;3kPFWJD`(D]=DZ!Fm@BKQ"ҔȽ% ȜK.`ٮNXQIy'@D gx-kA!{ }ƞg! 
-2LR1=,r_hdo/=`',"R6 kpx|,߰{^FC[ oV6!wOrJ]v}~f$vٮ?z1`aK'1e49"ͩbEò5"]_ЖLgiKuQwL4'5ۼoO72mGy#ТG/pZ\YٹZb[jÔͦ"wx}kO xM[f2 to@P$JioFВ0s-4X L&mݓL#N[.-7+h9;?AqpN'0m͞|ۏSe|h/*1mxZ dYQJCf[%ВbPcRO*[;]HYiP0ic0͒(1L$s$ml"8fwoH[io#`)dP|ۢR汇*Xmok#. ͕[4Oj(@ˬ@J414? 7ͣb>gTL2}"nB%16Zc|h%:;pG_}#;:y452Bw@?{ǭJ/ԩVRJlV"Ɏo'ΩEόĞ])Ulkzɏh AM jPq=$hploiWXGܷFMtaz.@jYGɹ{^>5@Eze<-t(/wT3X@o+Ƈ g茎n?`M>ƷoGM &Ul5w+ Nv4 ǭN{K Ut%G.9rEX+#%{qI. 1Xۘ8Um8~riqIgHuN\v.9N- yY]}'0S~8!Ҙ3ȑEeoF9ȑE֧¦m>S &181AKLbFe~ լ:U3WWpy@'~k`&0q^kAڏkq8Zhc=k'o٠m^P2}6}ǽ*/_xDFMic,g;nk)<_!.1dTid#L~Lrddҿ-F5k':D<Gut>=1&鼦1}Idե/(eGN *$ zjtf@ yYR}CͿ_~eUOsr[ü<|\|P h"H])VώlOt,;9)C>s~h>~rŚ5(A"iHM6(7P%g'QT%)[0% j?kNV/aXuCMs.q<)&%5QR (hoK=on@F/a8;8x^P_?#t)ʱLe z`ou+g2j_ BLGςIDm&P'jYlD9 "?Ft8X] 6`pUD rCl‹UPrjU apFAYT_\+pp9<{*ϵM;PPD}6*eWbPXJ]~E<io",xvU;F, 43Z*,,{"Om{{Տoz.5حg]:?`{Bbׂq,EidlM#3/d8̭V ro'ݓyxb߻$FU%F]D=zE15zHAZm/ǡr9-9.2273O"z&RKQ`9 {ш"F4 :'`QE߄A(qգxDQ0Gį~n[_O?׫h-q6f0b/Nφg="^_Bx~ui6>ݚxT|F(O:;nW޾{)M%_(-Hw^/P\pmlWKF |ߴi0Ɛ +IjqIđBD1h:o L4[ħsEJP3ZPwRXk@kvvȊAVirVkN0WtXMcZi`vnqhD2xLM$mjE/Uԃe]v"wU_}QCz4 ɀыcYRtN 9K$42U) pXA>5M5 jC-Gm 3P9w` W(QhU`zhжqĸQ9+b:tZ9#v~v^nNҡ{jxJكDEҸyV|:L/~̬~?]^zoq7;u.dԯ$ )1KNsq%67ޛ ~ot4_@21ǶϦ'e|w'}_5Ak欹оC7kalELJ{d3V)tth*EM1ث [ՒQuyO֕5o;ۿZwe\~nZ8y=|Ysz8onfV"g7drh.ywqnڸWq#v?퐝8^%~N/OnrYq_RUTx1r;.ڞA?0s|o3uϠ'7N?~f}5Q|PJTYt.KpE7M).bx)9AOyƆ"k 6mIRʠ7,<=d^{y8׆^{"w6Xܞ^T s0^^ڲ7XmMY5X8b-بg^3 %@‡d,~ߓ\frwjI$SGV" KtF">*5B%V1c W pԱ2^YKfoc}6?Cf'˷2y <^䵈~73zeLJV TO`e:Sxܮ묟m&W5r2̤{t2._Lъ'wbmEGb\$zT]\ƞ=(PHϒff(S\HcYI[Ux (M*1 0MwI;Szf+<n*Sם}:w)|6Nou{d Z}0hWൻ?d}'KN]_ޭD抣GF(g\A(zg/jhkzWnu_?l̓!`h&g}({j[x?덂$4/ʭ}uåه7'߾y5{N GH期/JX<{U*zV\RsyԦr)vF6j4 /D *8` m?$8x| j+ݛyҥ^ֽX/0q8q߱@z9ԴZݼ V*ТIڌz]׻7ure"5vgO?]7aEGѵU8=td>?yIrO- 5qeו;7􃜀GctLcgeR+%o[JIO6"[2w2SehXnnp2qxdAѢ!K&El1U52`Tp'gtk#z0Xnc$1O b&(N\- kL,K#6z%6K'Ʀ|N%j%qEA@/.>\ ŷw+UYu(ЀOWNb$e4"e6 yh Ĥ%8jjxObd4Alh\h0CȲ6O&(acS1.!*ĘVD53GnbI0t47rG5ǭ\ыً聹V^Jo%,؈fZ?X-REhoiK^ jPHG3K: ;҇9@k4)Ũ%)gL#~WdMb1ym lC,֨ :3C&FLcDI[r դ=Asi QsYg*c"Ι62UYD u6&2:W޼9X}k(P4cGz_VY$;Lb&.9# =h^/\բnF?KJiL>*Ji4`MOJ 
Ԙe#gs6xI^y6~mG1oܡ2:yJ\kky(eٵ6^DSkyϖQ<'9 `dޮ1&Bj64݉~`eNPe^X'SqY 5[IL?{x%.g@׏2N$M]b_H<%:yOMbEBDCNAybA0}clw F(NWi && 4"{ps"Ů1 XlqPcp`? %'4J#_ &ZF`6s#5eHj8EQGحX2$8iv!(z4bb(K "ɀ`dGzx =&*?ãCnȎM+s ,=\kk-~Qbom< Zߺo[:uE*j}J6@WoB2U[/c{wٳĝcaǗCɂR2[dOfVw`ٳFvzE- avQuSZb8HmK'u)qʐ[tO yApY E O`IDň[oںN>Dzm g2=A WpOBs@|· 㛄O֧-/ MAzS~ޔ^x:[+ֶ}4"q۽gZ6";!\8ٽ.Fg?FwQtƎՔFCi!)>=G=UzVN +/|;ޞ>&1]x:1qUφ|UHQ t#h/DX}b9CmðѦKFm*$h:r'^,gcHқ C)at1Z$R{JUF*i| țD18i& SH)_{Уexiez|h̵lTF 'Miggv}zGnZ* _j>1Ffڂ@ /.B~I} -%@`s#HQy6:1ERlTGN+Қ{X~sŷ:oNNDKpGJ}u Da‡?F;.Si_G3יV a/ d_ȞVᓃ >߃dJxV"{#>(Ϸ޼¬w۴egD%skXK_VE.A)O H2ŰjP cz}ū!"̏Mձ 'SE&b1 *Dw"T! =jAѶyD=m#G(!l'^]OV`l%1aH3!c#)<|ٙT^z䕣ƆsPVNƚq ~ƙ86&au*̫p[-|܎܍wE~/f}G iOBpUDױ3̩LP!,3Fx8;Y+h:_{ j#p_Wt[`'1JQ#E *#c_'y/~L4^_-ffHv%$V;|(H M p8mMxN[kBYk'-2:A0O,%2=@ Sf]),K#[dEhBNp63[Zv8Sܥ4ke.)\4>lתS縙*xp:Q(Nx:*Ei@]N PՀPO@km)lMɤi;M: UA-[_Cj4t}KRQY=G)煘=SbPoPHF*k]dcGD;GD#Gaʜ_M/ͶSߛU4[{yXoІ:Q<4(FCa9(}̸oo圿isXqPxEHZBr [%e["\ԃyLמAw^Le$2˭#af2R1DP]v񫟫¬E*+X= &~Yר,w>[-Q^>E^FI~X7CucCu$7ڽCXU )F2CC uݜ5 Ct B Q l2 ˰[ )na%qyvH7 X_͛h}s*<$fO[~{p[82cȏO3Z=5ȿDfX<K0dɗ!H#0D rEX6{mn<ӥ_/ï9WҲhv;g "1xd|c\)}t$cLLh¥Ō%&^%GF3N&R$Gb&-e97)""w[}_m4u:rB kk:ܨ>'>ooH^?}|aTk1j"_.ȯ^iMc QQɽ Z*%HXE9P KlU`cvBcZ \ ,pXLv`dv!&'11R""B;ȀMVHv$OZ(M kI ȡZXCӊx,T1* Ek'R>y:sHQHoOOe6V2eܶȳeKovO0Mq **c $<A8ߏ7)2h{|or>yGtؠ*_+:,חQ X2sYm1p^ϯ{){xQNu:ӛǭTըS'1S=qbO씈E)^Z*c{P+ojq٥mޭs5&1_5Ѐ/ΛP@7mM}o9xkedQfW*rfk(oL%|VlLXF,rx? 
D?fSLSѹ" M\G*&y0g35mz[Fve1d{9m8=3fQiY8 I?h]g5B0tam8攠ܥWl{#&m fuGn|Đ=n( bNpܮ_G1%:B?yva~[u L[Xzז~5}?qC⋫?Og@$-Tp':OΦocw󣠬 ܏ϪoXV}Ð8QXʞ8 Ct0j7P TN1WʞJ{ؗ,2ꃯL'lչY_3z -jc Me{} }m:vnB|ց==sfD`׆pU>^fhBD}H?>r'^#81鞻@/#?=3eXD`vBՑ #a=Wh{~۹m>^*r,Om$'ѡ|CZ8=VևW"+2 L# (Q&hV j$ҵrKar4"[,jYywA [5ʎKܕ4_qhbu|Q~)^_^3j9Z6a{u!IÇ4*d(h#V_np4:V7|Ql0IܒFyۢjW5(9EOD9=BKn57XL& cu,ޠXqtT׺,3q(x˹W55K$^5${NBm<8miI$x/0{IuTV &OMiZXL.]_;=ƛߚo~Y!*O7fbr9BFgz@e~qk]]F ʓ׈w!Ɇe.5C(S,gJz0$1n'o՝B LUMdXT L RV~rmnZ3P-ϮX뙃ޙc@aP`B֜0;q)("L(u46HڐrvQjhmQָSWqRąLoM/S ?sev"$ R"aFaʬK<_%-yŝRi.UYK!ѭbk tH)#Ob+xم]]N u) Q̜Vf9&u%.a0ŕÂP%Tlؒ~'E%w|n qv{t_s/gAjy?|=8J+p"S{v+?vcV ZiT]8V $(Ցot7§om!um"R׆SW\W\,ӳ~eWӪ#z@)2$3k  fnv?} W'hH:x=AٻL_DcbtE",b3nv_v"Jb l-K{P:Po`%(\2Od(*?{uZJa>LjJ[MFk( 1*ߧ>a9̠`q/7 LյTEX3nN| >Ljyn~՜5IGhO&^[*ΟfQSRNRB!ss D#?IgGrOo9>NsG;A&_797I>W!DSͫo^}_,]<88^ןF|mWw3>W?4o3ݙiŦUx_FHB\D'TĎHj#9MHd"f%)wZI`{K&Bs3Z-W~d ξ%fWj1 lsۘ '@X5 `ƌ!1eu'*]njkݓ'E_Mz|Mp5T9~ *`]-Ş!}6@ܹoKԈJn)[畮QSq#,VF"IEʒD"}\CDkō RtM)̙|`۽#ӣWWb|Fym}_*a{]tc] <TjTwzP!mLncL2MK6KA2)7%#PR (KsRv $t}BQ2\'I>xxBISRW4v:D 4&&5juh"lMMhA@)HQHi^ @KA{뎽8DV/);NFPG,qJ"Qq Lwxa>[xIYզР]*9 k=Wrq]psWIaȯhTH8_D@ߛd㭂c hNѱnǤu/'֌eCQ(O!s03bpYmNwB>ҁ:.)K_, P*K8Xr`i ;~ai^sgC2}g[5qa{[/Q2:QZs $[FhH0#DH/DHBwMGhZB oMU(”SM}L, } -\ȉ"O(74~~2(-w9IOrhb6Lge#Ug-!f&ŬF`T9f5RHwt^]=tH:\Es貉xw-l N\e#uH|C ´r=MUVA:u8Fx4&=EI e,Kjǎ,J_FTQ8پₘ~)R5筰J zȌvs: )-8ݢV6*,+ͧIu/[+/a?B 9X;h8!}ZG̀HxZΓ^2Z`[Lo ΨH"i0d̤̰ tHR1:ceV Wm$@*UtZMB~cI(:n*-v^ _voYˌeRWeZ1KIFܜ& Vq#pE̯\pZ>| TbݰU6 ]d݌Vgxv#0T*G}蠙 6of$l `Z|H5S/'HT(HmjZH5S>"u @ZE-R ;(&2쮬vWDm`\J17h'̼~qze X0Y}{Õ<_Vk&05xD`]P̭@ Rqb_~@A+YeL<58C|F>)DjSN.=_OE/@-UAtAԍT$gBDg\u(D9DMj5#ZP z+SŠb:abB |wUc|Ds`qw_%HCE~8l][B3*royUBE"o&qȽVM! y+rhA?:"'B '}N[oٙ`i$# Ŋ'$,ES#()UbI[ML!S[@JL ,2pZ$q;FRY I%K1NjW|5g`'U])$s Pʤ! 
CٕeBfe2JaRZOˣPGJuX9k˾}cGw,C|C#^I(RScQYfƒAB3+ɲ,َ%S}!d߸+%I82.VZX'#w>QIGQ"/Dk#>F!5HSΤEL(oRQ, K ǻ3 tMi059܌p/|Bv o0Li~a$f&*hy4].R7唖%5dG-͂XNn-n}DFJ̍(n[y,شh]j^lJ!>.R۝-ۥ.]1.t_9S(7z4}|FLm6ȝ%x_0~\x{Ɇou&yzSWlR\-@%,F#v(A[RE>x_¡q!Z<|42Ng$ $':376 [\89(}dlA K4TTŐp[o\TK[iMh)+My#|=gInm-E9 (q4-&`U e~*{4$O<SLC1ԑG3Mt2xʩ7ĝ~72y^:~]poy&3C%r>/r^u67uAO'RBs;_:.S@3wc .".w]Q)ʪ7˗Ŷ`!uAfo^Y#ڏ_ZC'~l+s $Ws7b6'3 i@mfB!{b]Hv1&yo1ڞ@YMzY#%\_8{?I/G5[Oֺ(Y k'{ZelV`vUJ2|҆|>/dzDٶ7ip =ȮmHUb!nb"vc2ovg0~̲6<0׼iWD`\9|xoJ O߃-Я5PUwVnE ufnܯyk9f6yL cgMIwT$kr.&vhciّjCiw7W3xb)ȋ&f7uF:k" h%D#y:{}4 OD1~>+iˆ@\̶}km-f<g&KUdx6mbzl <Nqe"ﭶ[Ы0'mHS(g) b8ca(l{7LCLmVUhx&$ w_N Sg HW=!{6mc*k$!GVƐv$+Hkoh}V>MP^P/K]S*8A]9I?騺|?m3P.}|$ԑՕ}1gvЀ nl#6.%DuܐDx{w+P} "5*~5]^^xO}Zp>?^hG[[f97~h6$fy>bJ3I{H&x1ǚkVq75viA昩@)ъ[Mi[{ar[q{F\󵸉&J$x>ͷ])eog@]r8>Uk`&S7*P6Y~${;oyɦ{y<- \pג;L ??T8eP ȝT!ODW\sр 'B>'Ox[9p=m|Q@J%='leUt$qn`턞4tTFyeg\H|uj'=G|VqT%x7P)J]q=ʣh.1f'FObzk?gI Vď,8LYT\S8k޽>Ř#$)JC˵ZD^h!TФhy3O?R<MMrNܭ,"HOT<ݥY k;c_*U 7o@/#6L}SՂAŵ]fcH7CLnC%eζQ ~u,T/M[kɒ^%w\(:a8E/o/h6^-"XaW/vqfLe^b#aWus[xֶQ-EޓmW}z܇(boQ&/AITg3J,RPW<1s?3rp/1 š^ܒ@Fcj!TI& r)vȨu2Hͺ 6=WJ+ @T `{!7"_- t~gGGmjwy$0`|t MϖrxVM\$ڊrVTg/;˄4 7X n9pXYCx-2w#rI$gYHu8¤|mPX]@d%IՅ &rqc5>I&0DYWu tmfSGޕ]?‚[*sLfNh * ERhehCx3 cCpbX+?nN2[.Tx=0\\N?5j6hBD;h|sGNUL!y[Bƥˢ%@L.doI+=!֖&=S4\uMSjW_ *T1.1hôb_wpu;Pz޼$MLɹP`IA%";bCLF>㤄O<3V̠r'2|B{x,rSR-#2tѺ`a=6g e3͢}$4gֽ?c3-C=6-lZntn ºE/k214dnF YX8AG@K6%ʚ@-!@]C5I10tv\zl~':xZ dJ[P4n[ zo-A{eh:€3Ůvwd: XpOBR-1 g8vV5 gW0+ 8+QlʜxipNÄb@ (ZxAK; :A1` #fj FuF*BL-TޭSx.}0ؠfD 47M@w3Ώ@|xjD"IsoY\@߅np{Ix'ߙHv95 eSy39D.jB_sCNJ1PJbI&,[;yػ0N5KIKMfqg]٭|.u-TK"_mB NSgl*&3K~c#-24+i*F0$\ sI #bgFf![*@a>%XkX=pwPv fljoTBp5Z2gě7giŷ^,?ϵDw͇W^,wbgZ+;u^?R[ ۊOX󞢕6 7_,ų[3K?S/ʳٍN\ԋNenM:wz̘ |EsS/}DY$;}.7_*h*JN >*˘c<քYٸ/OKU 0}5(*?M[V`si`j=uS߆m(wN0ϖ waa$K-D1cc+>'_N~zqPw=Y_ܻx-O.g+μ5 wq*w=vy d%?V^)yz7N u˫uHKx};g%= 0o܃Ժ1H_vG,RP].6lAPJػ:e-ƴmVگOŧ0z2`a\,E*`p!Qq1u6".)[|_m=[qTii]lp]sMB{%+qG.ּB# ĮY'];G;7]\3am- E!f/U)$ jk8Y'4ʢE1IL)Bc90o@]9!Ĥݗ&X@JN~Otܿz{aB@lv ˧۹|Id {7@3A-}=ɰU+@h$]>m=No$DZ6.)DaUYwa9}qk ?./o׺_~+jG,ݪ[}f?|j & 
]-' lIa'J,BU5Xn.}1W@yP%FXIo2IA})RɠwX9X Q ?GR5[ipu%ӔV*|S """fV&pcF'ϴ-h>E9U. ZmHrԻ I0qbwsm>4}.n'5&CB:\k^P('!eQXpG I.!>ZBDAc4!(DžZ7@ .ȝqă9]qGϱJ0OmA[ Sx.<#d40~[6Nsگ^~z=5 6ܢۛ8glWzI̯ h A#0)޿-xfy.tK~^يoө~Xm?mgr#h8Ѭ^^sMɊ1 z#(1Ķcb2^ X $J>aƀ#{ ItЯ 1LX~܋7*YPWz:i#6.NtMI p5랲R(j THjxb|2j7 70[ 62+lX.)d }a}H.}f-7R# GbjIz/Zٺ`(Q0bv-@K6*s>ɗ2,fBoe@ {Ou>_߿=;L5CKknN:=v9@XH~%O /- w#Ƒc>ԘA.ey^LKHb`VC3|~:T xZ6bSX}!^Bw>`}nJ7$p{=x:\27Ql1C@ M;Cvw,ulRu@#fu8"Jq˷zq5C0e6A 2376@B[j^hB #!>L'Gy SXQYS5ztlF%dQ(!{V;NnIux!MC?Kv1Oէ'9!boGarOIj|$9--i!@s߇fuZ]zi"6=rO*gV4!=Ct{_:o̖t?9 xl*ŸV2A.Ơȳt9xĞ9{?'zcggޖPeTbGLrު,9@`1WNV"йv~0*r8YjLAb5 >+Bz J,]hrЕļ&|B ;W%GHmfIP9&X,8r=7DZ+Uh p'u`M2$-BNTz4R*.=㙭zpEDgBPp џdR2ĈPA%kL+_iMc.m5B,侄 $h<m@5AC"86amӈbT,5:_5-#8< "p׵׻C8 Kd '0! P.( 5#}15*< {ZÊP D &z@rXsӀ`xfs/+Tv/_m s~iDAEPSꄦ]V>=K۹/υ[.H "q*"@bԃ9U8 \X_Zy'(C c`,ʺ&]4KEj੩lboZiOSY.;*$L>V5U.U.$-WO~fJD%V[67\{Kb1PNႻ[i; Rvefb.JZr1}7o7`$j)!lbtl*fչvzAiNS-aIAUQ3&a2T8:]9Z) [.BG >ez8xHh?\Q N=hKԳq垭{w,xnEd[B`t7zjk=ŚOyDKp]#Г.|AcԹroDKy̠QW4n}(PdzM!3xV=4b`zwmH_Mmx:{ssKI/x@5eWK:%R-P)VQ/E`մez}RKZ}ّvB%BfєTJxvGRuLDYI#Tx"1%rCǷDgj%NL.3g8"%DC )C&o((ڠ PjמfXL&Obe1zѮX p`YGd0H&B&):vI: J ie`4MXF2#,CY ,3O܎H҄iMQT)K@t0[Zcl)c@=saRLۦErY[.E=sU]'w5ZʰW,k+VO,*\Enu_KXR#)ZUJ&\ɛ6\>'o7:}hO!d_^cKEafטKI5GʬYxxl̨`ѩ֌c9&d}\sdcqS$}Lf4Gtf(t7HNT3F"$ 5pGqK[kbCJ(N.dAis&Hw}&KhR-m/jӡ;=ٌkt wF+?j1t窶}!JhD!j]= R#^nXOTlC NDk,E-/!rұs L4V`Ҷ7%1Ȍ&Re N7bț#Ϩ82ℑ&%G<(!Vǣ @,X)U%|kF%EFT٥#*}d`-2ڱ6kC'k2ʔkѼG_ȇ.\邮@SU5ex쵭 1,=0_qey. Xhg.S\E<.8db67KvO=x3xMLʇ/lx'Tb(0I0 .y3$Цkk뉝/ظh5z|hDS. 
I%4Փ7 ^3\*(T 㭳V$cgFcdWߵNU>wbwJEeQa+fQKQfS\ܳZ=e~q~@xsprpp<?ޡ,v(סPpV0&IzkpbF~qǺN&dHːʭw{+{vkta%1K7dꮛL!ہ}}/s9~_12۫=`ޙ8ͭ>+v$=]O/YKEXgnNmﺇ)}P{׏&bIÉ7&ښc{ILuY͖nw6d>vyhR 7sI/Fb-\)ZtE'U^{*=HbwM*p Qfjxs6fpAm k<9n_ì;0`k6e&| Z ;Gq4^sɕrѽw-ܐnFkipW?i/.6,R]d0]͠{W1GJ͜![۹5Nv [eaB㽽6?p*KPhhU tz*sB،v[%L 2Pl+nfvOq͐q9H.D}g"Ip)ܟ@D]Wq |CT (nqע2=Q:C2iEӈ(d@ՌP[Kׁ\ U(,1&0o3/ dELYDT$4" q$Ku.2GU C<'<dSU#ALN]ӲǷɝ҇Izg^5c|H2uɊ6jr&T n:7-82V;^H {HoKINQ:<b0)@xG>)x!?k)R '.),~*(vWh);䐯x[, a"u hf99CDm>EC8 E ["{:mj@,}l#C2#ka 7JrMgeDZUӂl &'al!c/ $=L|T :vٌlYbxUt UriXK5kΎ8v2,33w KGΪoڄo'P1,߃p>kJǻrf{6}|, ͏ĀFckAtz9>%EwkeS=qmYb~N`ſyzO]Rb_Qta_>Y+cg,SYmѻmjXV(,ڨDid^ Kw&dUNKP=*I7.>2K}"3kYߓ f5T*\|7i$Gk#WYsŖd?3#@r9.!$QbpkM , gCFeK4 KW0~]#@HRr]J dK[4}gzOj/vTY[t Ԓu6eic?O~c]Ad~|9g~֞ݖ/01 AOL*(CĄ'DD QL3OR24Hw9&YI N@MY3v #7izx3=Xꮛ]wdM,6l&3c9E437"aɸ8qRM_=x e TEQjw:E,53B(Tq)LHna q/#XmR*<6f. ƓXc rKʌˮ7ĸphch$4֊wX3R<1Ep&T,XJ(KbH껞xmLj85IbX`ZYvz|OUI Mlw!?cτOZ+ ye.~z2%n+7aY+rу6~x"Xd}?﯆I3B>IOap)gfڮy\pX+Vx>m5f 5jz>狡m|1rxDng"T`uX LJh9*.?NgxvRJV= [Tv|vsߣpxY 3JѰ&JWٸ Hű2XTn~٫@ߍfy_~ג q WRuFGF^WWcȫ9,7$RgmI\JA餶ƻ1L%̻% rX7nI6O?n n:hq00vK4:v`!߸v)-о%5NEYbP[)d{ٓK|m-vׇ=|˸C-cް/RG /f, -JK;Wts{$*䛠aOQ0i7\9T&v'Ty=LuLh E/Eȋjғ؏Z5}0ǤoΝ{Z{hpZ ܹ 'w+OnJGT .JN*mNLKuH{IՃ.{SāF Kl˽9ymol+4˂ڧ̼Z-LkWpnR #~]$qCJ5'^S$E(Y QQmypGjTA[ |&ݦ8ix+aGp@"^Ńn̼qIA#K,xj:%g^0E.LiRaUڎT5u DS\RYaRb'KK! 9%FJ^irj3^Q1|[`!SB-F4Y0 θe !h#1%&+ƘťV)VW lB!,s#*.IJrوT}kO}\($Bxoz^j:hCL; 5+Jr|E0 q/.x*|F~DMf6$6$AUaITl%7N3 ]Mwv@"ɸg$PQY! 2|~Drg`jOhF\4Վcysēb-O]Pap5?>b'oZ"B5v莿7"q%(CVҦa:?eZ<3E5 ,no$A.8E_B%uź/7Di2xR+[E۩3Ҿm(:MV ,6r)JVR^mhFݚE%+"v忼[? N9oT_T"lQz=̲q! Bft86/7S|Z{pCIqrӷ!:  ! 
n<76^!=7Q%}ˀ8V=tQ40yb y/"\?uRRn ,+>K[F6Qb|Jj^>=>>DǧGqw7O ELJCj]67*S)mk!B'qt[ʑ(|׍^1"+^@Jo X N^Z(2 49(mbvw\xoIB&vLSVԺ2T?9 (*PUQn7hM c_%ͤ`f:(U:o0S2iu$DJV),|BD6y_5 j(EoF ny᧟|sFo/\]a"/(Ս3M{Pd4'j;gg6AɊ4dջ穀Im ^<)`:~^4a T%7%z"!K7/;77{4!BFb 0QV&-̄Vy)3Z8H:޺RthrBA5HQ,6$Qn,$YA$AR!X4 kdDedDKE麺Mr'19hLN<#e4~ϗ7<02?g'R8B22WRԀXaE*"ͧpw|kKrXGa <ơDp縿3(XA~"T!#^:9sܗ sNoyO>HYR*{93D!bY `h8T<2^yN&= L z.rI+-sj4HNS1ɿԽ7:.`>yF7s}'-,8ŭ WޠQmAMr6[(r~QOV{:Xu._`ӂI! Xbl}Ũsϼ0BcC"HIeSe.TU@4j,ޠߌ?<]R=JmC}yqE!\uuk`1o\_ѸJn|:TYK$CK^g^h1 xn-ZhAD:yD k(C2C8W[[Vty>1ӯN֙O )E/\59@Nv9ຟgj7Cx"N!@C0S\ &ejhq;fK'* ATr$ V{X#:o-+-C׋_/7\":NjF՘5ό -)gFbp' 'xKDW~n^yW#RQ/b8m#EQ)N1xڬpLJn\BFa)t+auD aL\<ǼR,RhZdq&pX`&7'V]Ԗ+ױrˌ>BFx'7x5B$BkƒXDNsdX|Zdi5@L5B)-"]z^*_0&[0A@mc l7>204Fi30,!80[֍v%-l כc[oҰ՚\*0ku$)y<7$=);݃(MN̒4+~FayEQi0QXWzd4aJfIeqsKI#[Dʎ*VSSnFU('q9P)Lc%K 6d~kNT()GNvY GCיm}2F\Lz7GT5>i;껳h ~\90bOO'1hF=Qad7> N%ޤUa_Ҋ>īM K:26 *F=ij_Ufv1mUOruSV潗+j X(L%J[tKIW*N-N ͹o/h[V!FDڀ?_?$^_l!wnTo=c OnJ+YjQuwMݶJhftVBms+tW:#ln/Cma5nw#H]V[mxBQ(v6F $äF-*]9Q4a'i;F;VRL>赎쵾D`2\F 0 lGRvo]j( ,kYVMo!ַɕ g?v? 
BI- 4?xyЦfw!A7=EO]B~oGaz;!.^j?Laui)=Wr?]Ge@QxUuRL9N˽ÔJ&X{96n3rFfjz5 WҬXl ,1ɴ4$QLݷ#JXb`ic29_k&`a_ӈlLc;~XVcj۝=VҘ9nOrCv,۝tzA *My\W/*#`)X#1cME( $3{s )aϰ"B$"!@]<] cUfQ@dL|%+><yCs>'k|:XuӚ<%5`a+aQhm)eJMK둇V2+i0u4ojCI8ƭBηWd\^- &|l{slH%"+qDZc^9Jr$WF#AUq]Yo#G+B?> b;;}o#OmQIFTYbVe]-6$*YEdd!:dk aDQm" ŵt /*jb6 K-H;=5caz]±,5Mjm ȭۼ-4Ӻ;q b'N}Tda 4pRv;a4_2޷y ,ٺW쏋 I2t͜9$2Xo)W,PF4Xc/Zrj5F BFL"Q( aʍ@g cQ|Zm`j9\/WͧrY"&2uQLp/&)=KU1؝*!$ tQ2'`3&+I}dn"N'qEHaĤ=ȕVk56K$F)Fx5{8tFU2 +M#h Tj@o`4H+wV`ՆKg5!Py(SG a$hyQMhOjf2JYYe[ #*Fɽs1z׀@"R45)``yxDD* zTFIFn{w "m_x0W3m+fS wCAti%TXe4c9m,׆񒪧`b.:i&tB?H7^Z_l/6DVn=ӑ#_x{DN`,fp]k"+Na6 ΤFj!%rz=w9pkI7M"'Lw_ܼ߰D`/E3(t_o"ubHPL tE<:5{cyP^?^x5 R;X5 ^SǟOl_=p*j=&B5eeR }<UmZٝ,O@,85Q*u/ŵTH3'QvĨ1B^Ĩej+FŹ^HJ*#)ʂ]H~݁=H@%H53y,F9؝b4 H Z;_)c!FŰ GxC C]8FDQ C@;OJ(lU"ʰOrRa4jzm'[ r0oxǬ!b9^k+OVh0|߸'/voܓ5qO=5jTyV&5}4j`5d7ڍV J &lcDBL5~Ni(,nd4]8Q+Nx#CIIq*Jk'm6|{5\,7O_Cߙ9߮yKLtUo95¬`0tYs -l̯js]E;4/s]C% @$4+T>Fa/pM~ëgܫ/ Zǭvd>G칕5x҈ڡcQuv,j7zt.'cXTdqѵq2&(i']#/}]>%`zNVtql(`Q/D^ (AA訞;8؃#˧7Ynh{7wfl~>U6~ކԏ$m>S}&R`ƟIU3ќp>540FY CYU0qgƷa߬0kGKb,Յ- inxǜ%[32$FLKbQa?G`Mi~=-\inxnj%;Չ1<ʎhɺx(Oc /Q!h%>B>(XMO a  cJɰ$PtQ9YNdEG=I4,5`njA0!Rv0]T󧰼dT Ղ/!˔d{[N?ӊK1M;x:Ugi$v;l* I?"D TUc3ϵ'RQɝ ^E0\yϵ#"RzK+4sSʍ*xpEL!.F3.`fRR&&ݡ'2Q9qs\@ dJ4. T%]T)f'Ls)%TXekVJ8}ԦLLLܭBR ܹDR4ȃ &ކTX#C1j9VVr.u/^J,GޙjrU&( "PhC)';O,P 8 JP`UiaDy5vȠy%V^8a(FPv~o/sd#ߘWo>՛˅}ĸ˩o.<=ռpym|`o]|n9OaZמ&u$g,eD)f/W"{T^J9fߧb~S_z ,~V 9SFSl})Bǁ%L>lVI?KJLNzDAy׫OuPΎp1i'%1&fjK-µD8ߒ֪z4E#\eWO@ep*9T_ξM@m:!+ o럷=qzyvmS8ͯ޼:_3.WnV%s083YOSK?ZaRqu} _fWa:1D@U/R֡ .Ǖi#zZ˜oXs9Okȋz+Y,jxg(vbwP%9G  RtJau!ר'/ ;WN+\yNWd==M[CoW|A=u /{@Gn(  $5:Ħ|HbGvu`}\l.PwY)^XPE<Zz-t_r bb$Z{%r)zr|ح7o(,T,Kk%4[/a/כ$a|p@v:v,5҆*xgPPJfg~9p.W{zmD{7,g^2kOent:9KsRc֐+5> &CvvP# f +ZZCNE8xx"`NQŠy++yeY g dڱEdXs3i9ӝ$_{e1_dFM!y }ǬMk~3:?bbbf ;4x 8*t"u|2lt>y_1Kxͤ<ݘ٢Ě}'WhB%gt&\;ly=3}e6`\=;$zKYrqVeY&AI`/@J T±hL+F=7>JXi<~3$B0XS-d\)՚ժK=$]]ܮww(RiGϢ7Vo_n\zߞ懤X;]^zp6ik7Ok w;ml?(}ǎ4ouXo?f$/Tֲ2t 䡠;&3auW^HaܮUͭ>fƿݶ(,/vb[io? 
wVw\53u Mƻ] ^Jdm׮;Sr*0yg] gz9_!2kIeFf!ab1w^l kI=}#T*UղnK,fe|q_EKOؠJ*]-qPY?e: ;j[Ӗ?!Xօr=SfH"ft9e> LsIjH xrvc%j)^4mqգFūlĚ])J=5r9iqּk*4K5j8~m_Ge -kj_ ѻ.Dh%{3m 0C=jx M`ۋ 5ن]!itVxM\9ɥ'^j9,,&#ClIőQ#6QW4;Q!am0*{SgX5$Ar%ĞXO.!Ԏ*I$2Ԥdᣖ e,}LB.o0&ւA΂SYҶۂEޤdh(5%IB{Ƈت-K 5n f')h%]TC% -^0(UJ̅1kyL($V#VV"4r4P RyQ2nQ@̝AR`3`)#茠!Q2"C Y"ddmpb` {f1R=ou0<;ÊhK!Ӄx?]lyQA_t~|F")y|#&O^Mq>*ѴnO!'ௌwG L;0B@)t_`#-&`Mc] G!(r4,(5V nf0F*TM[P/ gn)\| CkAtCeƠm .^ 6 ɢ",R\C)V-%ťWj1G\ '-`sw]w?Cԙ~{5ەxF?W1\[)unjxnnnG5j*ˆRҌTn@(,.WNuT%xc?@|F/gPRH c*NlghƔ}幔e;6 fiv3N9\`>| ~^?;@X$md蘰QqEʾ B+pFD D9t]`^ln`=6Cn*m/pe/xFQUYfni EٲV#R Z >AE4e\)k[ Uc"眢7!)XXm"'ec9Q (2YLdbi;hHvpQ۠H"rǩ \W'$&rA&oYǵxYXmƒkG(Ђ)Ň#-%P(Vlrh T͌-5ՁTDv՘a0՗7<>2+%̩ƧoߧG3;U===>O,#u! o p';0<̡́bpf5~*Sri`G??ź__p"g6ްlRO˻iQ1%7wwJi.M:G䴎q>i[xa) jjҴCK\YS1yJfwMFD?4 m PR&V]jh3;%;%hJACTXR(vol:=n{}@ @O$4Q BV;ӀF kpFFZk<unȀVy[RɳԡKcʻK )+0tus=>f"O )B=@Rg H!-8[|<:=9>G'F[s<)Dpp;!!x 8IJ7yo]@6R3VHJȞcS 96TFǡ S;m`J\m2m yxQ#^h] |K-.[xl ^[lTkUf%#7%ԁ$aǬ^ gP]g% WՎSw<k0ߺom41 Rlf 3gA@K-B8Aӧ]j35[3@\\^KJNs.+}V$.s-1J8`M op!uڇR t= /P6 +y˦/ֲ7q>b>f6hbl̅ $&wkm}'uTg\K0C"Ge 9(m [ cUj`ԏ= ګN̕B^l큦b+G^,}Zzџc'՟wV^~&7\glb jϦ GOa\SF%  FL@kfǝ՚m!hq-ݓf(W^]ܜRPlXhWJz}֥`S.*!+H_%^!ZV.CtKro\^.ߜ"C}t}s7npq]ӪdLtWՉS'nWynD)Qi `Qd9ΉqG0\V&IJNBdhreߖ]E_zDzuQ9Ϣ3XgQ.e2A2F`&2 %$d[JEߝ5H*D. 
%TKRUd))fR,Z i3'h5d.$H"K d4'2p xƔ+B_^R~r6k[8=CIDyH &&աٷaD#u _];]` f%DXInS*K&,sL {z'v@gS)cl]M ;b>:%xNR"!0"9 L:E2KBr=V vYXzxf9'+%X'$^&/no?jNӥ"~<$y=ڏnr^D]ceUÛCa;WȜ[Ti|+~K^\n?Zc(+e3(Jk]J9˒e +mE4m'Г$/"Y&+3إRXF/2UE~Yr0a{=p^tvEFY&ZnWd\ޫ:~(8\ӲjmEno 9Kh}Zdu3cbXOVkR/JGR:| #nq 'sS <̹F btf*TlnhښyNm͢ K!+'6-44pga2s ovn@#i\ls;?%qjg.Ŗ"6e8jUl 5ԍ!HK֐gZ9tu12k|ޕad}p !mK _{U2Mj׆̻цL8Mv]3%/nN Ɠ^>IޥohZ% |UY]guJw!m5חSZտR.-i[NZ^lڼ Lױ_@;ЇFNQR"W_;IYc yW9϶Zc\Y?Kr^,C/hqHW G{ksA;Z 9rT0V+eȕ%{Oo3̐]VTbGUg&vj3y`%o2 jzϼrİvy9&Zv55#) Iɱx$،##-SHK$@.jNLt }jd :ʩ LV|sjI^svn闟AxI&3|L+rs;&HᜃąC;F "rG {KC?rn$FRq."w RFY(T1$'ZA,{?F){E6(Q5c )S ON2" (|L'?#{tC5z"9z@`Հ +v>IN(kY[Ԭ$+y3+[X>\ESd+9 ÓakilHa J"ڽR,O꩔^)LQ{MJ4T@RXD8,e U|z0poTk*(\&j(Z)8\j*Y) Yy`T_߯R(--_pH\N[eͼJPըVPh Ec*F *tUu8Iík?s䗧v#Y}sLYn R vמBW9;fZʖK׏ 輪+ҎӪ|s^gEBU>?ŤJ''j%@Tsɐ@7B'`W긨?~&>9{fCv9d0TU,jGVU>= |ֿ> AW 8Uf_ *_rUh[L2Nt7tsr1\%8ZZ/LA{*/ ~#nEA/D}<b )ʞOO'B1Dd"Ot07M9D@ RLg|"{QnѦO4}2xrpC g5df%tzC߮}[Ad,ڰGF#?rFQvI.RTڦvS@-wK\=ﮇ_<~zV^g~*c~<3mM59 ?u { S"5FYiE*-VŎAn0#xtlD+[HFambBHjG_0l066SESl+nL@2^u\%~LLq0[Lq%Aligɦ$iM [.`2MeR߂1mM?w׫(Z~UK~Ry,Ȁ'}٭Kj1`Y\~L:X"r{Ldv6M!_&s%=+QJ;dį E4D@ X7psMXP N;X9ݗDn᳒9[hLe}6:Pĺaф&[ D'[}I"ƙݥ[5֭ E4Hk־uc ZP N;Xg,S]u _iʐ\DCdcY}I+[(yD'E[+ifd-xy[hL ղ=Fmnwn 5ZneHW.A2eWP -<"֭HDknkʐ\Dd ٳnVnwn R݂׺!!_Թ{kܥojɼ[&5Qlޭi-\Br )[wך~gTM2ir c]ejcJ~~eQV&H!:w^R19>w ;NNw;UJiu+.3ţ] 24A!ٹZ lW25As4Z*H>_-*VK%s5M+kZ RPU e֯ JIU֯ 3ֽ5W24aܽ5)U֯ R)D֯ ֽ5E8WR5H޽5+kZ%t+kZʚ&+kZyZ(Q9wAXwGpgqhCnĬvqN[%gY[isPKSl# ď\X)WEj{v/R˅۩Ԃ3?T=Ɛh#ELHPg(k+HhHl ;4j1ۻKvu|DoRxLgO˿xf.OWu=ݻ&6{I@Da.x]Dxs+ P[fiF-u.2r!S"-8G& rSbPBH3)|$*D\8#LZp6Ea)3$KIZx pH ȸ Bao aiǑT1bHW29P.rBқL%݇ c-鄦Xo<$gjE1%Ed>E dv'ȶ |M#-jނ χIu\nvהbw|yvxu ʭѶZ<>Odf~97 Y (N^׭n13x:976Rˮt7za|^ǐ&jH#G/?l揋j?ţBy0YL׀L}C}K /<^|;j8l{Cgn`L6]erc0H`r]fmbtfaGxMݗeFP^6N?Mj`85/0A#HJ܈E"l!G*._J';ZW3:悓|\prqh~NS597n~!.f 0O%nQ1'OLϼ7[w޵qcٿ"̬^ibb0;#/ |ښÑN<,%fJVHsyyw |{ΝgǍUS7hH63^Ԕ +@L0j͇H%'>f ii("JY4R&x*pSn-1̢yB#w`PE9msFF]L"34+`]~<6qv5s}BJ*n$"gpauNG"^_]Pf?4n8pHysN x6ӟjM|?Ï֪xnaƓ/Kgz ?]ӋxVh%.534H 3b{Qܪjkc5z+Pv1޿p= m1J ~ r`),!<>ɋ߮i `H% ܠE:owISF d W9e1 
MM.nތ0?'+Bov!\ߍO13jMhYGc-[Z Aڡ;]:^)j0g9RIx> ځa"*+ \J4] 2^V.#޼HZ^CV,ks_V;Ѓe[:(QژʴϺ\G4Iܰ.>AR㽷gZlYx^V&j9x8 g7#2GAJ)@G| +E:Z`^U\5AႨZ J2jrATOT.)*Lمw qWTBBn/SL ,P ?]3jJ۟PN=6"fU"?tVBڶ )G@4^ݷ{d( ݐào_]^]>22v3^v7A"TY d5N񞻎T\`Mʉ.)^袙ƭqKZ83Vęȅ\m\ї} w^8$F3&K{eekEk$f)823ӓfȜI=>wsf_N7WAuQߴH1OR>%%5,,?%w`'ڬ0װrGJ3 5c-veQTTT._=?bƯ mw79}uQ{}E9vy5\?/r-:f򷷾ۂME-{~n[Z+|8AMه|l7|5o/?lࠃ$3gMɐT_y] e+<&<(gkZzAg"7q~zy;ۗ]&d, z-x(a:{8ExV=%JWܳ^[>~jyHqݺq7 +@x _(Iԣ?HqyTYrlH#p0H|Fʹk\yGv#* .d8ϓHύe9qR `We Od}w_-g6Wc^#lJߏm ymL f%O/z6Yv5ۋ_]n^4WԹ@gA7s#PEXm t(Bڱj60`P~웝M%T|ې\rڙzB);4!uc-Ř<î% Qú\"6^!E䬩8 Ac[ G OoZwi'9-嬴 󥜴ҧQ7;Q@8iB6(J#h7s>eHuFDcԁsh|Bx:2|=sh&96;v:?thQ<γ;o3DVyA9)Z#tRYe0dS3BrM&[sS쌪mUt@([[͒mEo:as0;QXzBxS(!d. ߒyY:פ~?CDdT !^hœM(puECT4Y#=<1djW"MGJ>uQjX&Hfn#ȲaETʠtX!lD -̡A2Rx)6pCz>,bX9fhG { 1G*nw@mcNjeӢz|Û/I$iIua=Kx_݆$P~|_o?}p ^ߍӉQF niw^fĝJζ'o;07^s\2njw~ۛdd |h>XtXvijUҟw'EӕS N27߼p|4z;=e|: F̣?(PPtrQ2\9#ñHR"2QZT2ZgApoœauGOOzAJ{NS"ʗߞS)ud*.YIӺ;]‘2؃VǠ%D2 >2ΠHR&p (CDD_Nwo_jH`stM[|T}|hH8jиj Wf^oESxϦ]ݣHd?gſ;\^6N/S+pKsps$n*c}G~''Ϫ8Lna0'=%N?~xL4B?WFEn]wѽp'y y2'C`XnLDQ+eѶysf1C;G+.WPUѽƷ,@ted-NB};ljkOWR <%`Gl%)" ۘ{fyD0J`pܦ Y2VmS;iز H.2a*IHH&(|\`Z'M{`BHS6+Ac9+Z6gw1?cy[Nd7)EJ< p}!Y QZDbgyΙVtąĘZMA{RHؤV3^WWb"lPX`.#l.!_ -BΚʛ󳛫^~/GemkW sA{5q2'V_߿6Sg)e` Z>%:H\HS똷4\Műhq$n!]Z'*6cl}*29Nlȩ`C\Ps\gXwqFwC=YE2NhJxcHaW㥫;ZwMg6}:vmir#хЅ OXOI\rW8ֆNpB1(mȝ\5crpσxfwgeޢ< $Og&/nԷqzs1{~{z3Aofnf?eo?[:s]B&>Ξ BMٻ=C3=W僪>I]U@G@iH$ɉזDV9FC #,\pս9; 8ddP)!LItL|<c)[\(01!$9$Kyh<eDA#7FcM-bz9x%.`sMnn.1Vk ͱ8 QR+޻6G11L.Ĉ(}2;߯1b OimODi<̾R`_%/5v+zs½*h8J 'Uw(C> /Ch9ׄnD©H3lJd'jHhmV PR?{OƑ_!eÀ?~"FI$YT(*CI^b{f([Fuu]]]7=Ά ;;َi_ o:9H7S}%ڲ ˶D FE6DFM &˴Q-[@ɓ3mq{$X&udz#+Ɔ7hiKn{'뙶FMXW#J'8L9q&QB so8 %XFZJɃlӨ4 #Gcٮn҈U{̫FuwY%RZm_wZĶXO~[|5]gRzkۣTO|JhA%gf"uF^Q!jo)+4;ٻomP < ~ nGN=41\P4L~^w80~M@׿`&ԆɳO`gQ)f'ꇽZ ٣Cֳ;/V?Zz5)`G!2V]> |Szwv:QΈCv4,u2A3&5dET{0VIΥy 5O=zI7a.NYt25;,ON1>с[v~E5j7BvG'f#"j.琲L2gP(}`3 үP4Tp3QZRq|1?cr FK_r\Ĩjp"Uψ\6mlFi%X& hsā6 SVf=s/!Ƭxs_+1 xT @B3O]WSR.-+- "pCVe`$&x&Ժir7P).pc>^7 4wpUpT߅o=!OV./[7,_sv Wv\]hů?|1N]OnA{Ϋ7,Hm6{0ړKçx`GJ!Av|oLc . 
ƙ*6--ėOڙoŸ KuSζI J8Ո^Wg DT@,uԜdnK&@ /6c{sbr8"Gqeݕ>x"nRt@`/V6%a c.s1j>f.)֦!6H$h8s1tO|+is##e$D!:X M8\@RwsɈ/i.OV zx3x?M!~- w7UgC n?jv6k> F>{e#N"tT$2黑b:dۯm*mFiKZ 3N#%L6xl>K$ՆZ#1'܅vcP^ƻqmdF±QJm}@J1gAP4 JU*Dq R^Y\RJ15EhsK|))D5!YDf >4~ MKD$6 SW* *=FS9MJ!(h F]tu[eJB\EKA(ElJ䔻j)DV8;AJTF@4FMMX+JPYz8PIdHd>LHvUg"{ݱI?g vula2݆tTHߨ:Q\z `I"ϝt=X: vInXU`L,rHF'Yg\l:bm6*o./-*JLSù%\c[c1*A4) $oE"iQ^Sq [n2ăߑ%ѣNnHV4< {sy"(M0Oo~dƃ>Qe&%S;1W1osLj0K.A~+a1 [ro[Ik00A EGUJ]pc!!17?]'9skrxAw 3~̍[aPJ)^zHeL9\xӸjG@>mRXкRKeM@ T+yTTQ/3I ">v\lWS8(ƣ6\Il[lcV 95Z|:<jDMQCiQ%+,֔,*`ᯗWDB 0Sd+KD!%2P*LR2Yi%ޜ=T"j%Vjs0S v}Ãѐ"'Mޚ~r/Q@(s}Q_.hvrZl)KZȗ8|9dwFwvtzDk2əNGeI+R`/~Pr`f樂OZ"Pk~ n%)Xkd/,`dJwͭ6Nmw.n%>SU* af::̇9 YO<3 1@OYL4fkݾ\|uZK-Ľ pɇJ 0kNj-W}`*l]ŃxuU?ت_X`RD}*%S P3j˲!*i'UC|3w˝INh=Bb]e|DR;Sr#Ԁ3:HlK+ukG>p!/< }ԂMx7RǪ{t)~DBޯCMe:{20)pC7x-_>Kk Hm]l?0mP8 [XRz,Oi9ŒZ#KGa% K9LQD) k?}sr3KThZimka@F IûE_٭@Cxo<]7xd1; ϱ*X\Ƌ 1߬\;a XC"hcmpLqxw_ΜjSZ#lg)JC!QVJjsvv&ngIm3pI^qt5v#D/LR< Y?OJ**ɫ$+a*#S*&q%9jD uYDHޒ& '1gS/a_ٺpVX ZX|т.`+B1E0-$(F < /0C<-h&B !S= s6Ei{=SJc;i:]dI'i\O[DɒC줺K.IxH`Z MAx=Ia˽R KJ(mf[xfk66>2dB(q޳007q#_֍\9d6oԈmjzʂ`,z ʿk>:I'.@])Fvn:{@ufPdRAÃGy \Lk#7ND4['b1vMRx|* *@gɓ6L6TGU5JWWڔohh(> ,c]K,G_ΏWC=_[) g]>g*6Vu׮N1RA9;C[kK土R]A*DX-&7X]Nn: OcM *QTB4GC[URlP(J}êgS"2lV 2 70/D ,MJi!F%712QJ/7a}Cx>{tjA&BWbfo.y.iJnn'uYwuawKGvܗ&uN'g ~9_dN-ȧJ{ ŵ+Ԇ~]?釐pbFr2 '1€~,%`w 2 # qhM#Vs hW%|o{B'd,0-ud%{^ {o{GZw8|~ @M9XF`4+m]'筽/v|gë룳7'W'gony8p6q%tp6 H[_?uGMg4}v uo=A/ ,t%߽<OZjkkfKd >;_6 wQjv^wfF:ꆝGHƽ{iZkP/Ę+3g]+ -[Mmfr6!h{o7ފdʟ? 
/Ov2bF3,^ydze:eo\tw'Na{1w]] ΍X&O6L]혻IS)ޥ,lj0w(dk+a%1@w|位3 O;g74w4N{hÎzʶNv?&CS` ?go/Wvzq8s": p]9gWWERWg kh2b (4Yz9fFten6>`/iq9\E{3o!ջ<~6V.?OeO8ت.k<2gL?`k:,Y} \^e~ pk=0»w/JQ Z@Y ۦa,AWf2?kqB9KGS3'IcpEhݻopQ`U(۱coaK1 GM &Eg1GXiy *Pl!\ƌ 2֔[*BjLjQboMl<A4CJUտo/m5süyyNr ޼ op?kvo2<".7nNu2h؉n/z{`ՉeD.bڜ}; ~ev ~8d};O~Ĺ9>=;{@R&?ݛ&#d0gyk@_8'n1' n BÞwTo{ŭnΧph>UDtq.q#I%ÊƯ*xqP֝߼Ͼ7?gXcHH 3w{]/֔0ZG?##T*X-e,֮.4 "ģ $n62֪㽊M4:n}gW} K|S e+dݜ(]Ma_.Vim8e,8_ 5j9C[қ JyF9h!Hq]5ޟ1qsU &ZY9w?.zMP]S\Kҹ&[i)dJZ3[sk)-mH]?ַji!DI@qmsHPl)JVDNba4'r<;$}8%cNFޤ JĒi+Šr-,FjŁg9OY Tͪ꺪kNfpnϚB9iV 睙({sY_ ̗{~}˓W3_wURUK*2/Y&e h3YU`8h.`)i%q ),$R10Vq&V5ØgmUT(b@`Z$(NdVΑjWk>t1.Y&'5nx_ڴX8$07s1l012 D,H0AT$Hl51 ^eǰ☋D&"!C!BT 9R"B6(Ra( <">ؒR\ fW"XɛšQl۾̿{x" iE(*Zj"exơT(!Q$6Rk P{"'3BelU'0:8»ȰuT81PDEq 805 2 İuul1tw𪱵=v+6wƹkQD;F7&h7pH54ۜCK։hJYZ[aIz֢C!Cy/@~T~v h ճVaIx6$F8"J!P  Ȣ^L#~Ik2&>@9[@).*L=;Znޥ(;GIO׀h$/QE#'c|˜_찓|GY[s|M7$#2:AI8LGqp K쯌By(g&k^I͠|g)wM e_9FZH_;K&V !Z/vi[Tؖ> @KyFo} ^54K-ڲ$I^b*A(nԘz_3J {l8O؜ǣݷ?.2^Jge3Q> !P/ڥk v+̰h' E 6b0 a8,sD,VKQbtWM|qӋgw 1PFC\[ȏ}AYN\{]Jnn _,#˄/El"D3k*8՛ ^lh`eɑŒNMUCNcĨMc qdpa"Bw!:11.~Gyb$f9LHFFbc,e. 
yb-G< #XMƬ(Danclk!Y(N gqZ&y )D$0 1L7q9ݵHUR^<ƻ:ĜbL[- cԃ'DxG`ea)AH1+tM2wIiM%njvBr2cWMLqU!o:dX !ً)2zBN5RXq#@1JiB##nivdM5q)၅4H EZ,FA|(̍c.S*4S5ihJZR=JLAl>41)؇ b9,3^žȥty02OyL5DŽX IIT*-hKIbK @%(a|#ʈЪ(,7DڄcNaPn?RtX&߿MQxqJM$٫/SWxt4t̻mn4x{jO&_н;a7 |[x}~surxzy}نɻv0?{F8 vqjp9\Lr '3b`ٱ&<֘-nɒV?$k&b]3H=gFzl'?|zzrj~_}߫dsxf{YNKzf9VyBWgգ㳏|>zoNNjn0Yn;e'$#F$ bPhE4ɈUΥ1)f6Fw|9yݻ=h˫#0)'~i ,7+6bkk-Ō@`" ƆX#YlX[Kij4ai~owGQ2YjɰIaf3"C;Jb$SP@GI|~5zlȢWvFO2m,m٠sMHf R+E $1E+?,~ɷfk63׃i督pw_b-OguKF4)m0%Fw}6T~ab7+?G2ɅI>Yb-tdByf,Ail.]δNmBxFh`,V2(" Ktyc<^J.`B^g:0Q!AfҴ{VQٺgN\#✖{즨Jk–w(L)HRiJwf/(uĠJ8,TXJt)0Jcֵ+dI_>}Jh݉),kmN8ҙ$5sT 6n@Fg_͇{bjhbrjI:dab)K# 0V\e"*WN/Mp4-h(b8q[?Q)Tg !]W^Y<>C4rk'AZǥ>k6ByҌ5]`/`\oqoD;nN (&jA{܊CN8 l~b{@tC'ɸռrW^@&if::jƩ"| ~H-(8]V .Vrcs](p?/f_jvJ,m3X)NmIb44Ԟz?%[̍RfL,XHTԕḊ/B*"KX"h8t*IO1WgXwx$?,elο#T[i\f(n3殓"6^^!Wr!;~p{mkJbvip(|{r6–L-57쿴h Űٲ>І ;+ehω(5%6称glx5tnHPr,a<&lgf Gі ԋ#im m-Z^n୸ ML4e53 ,6ĂmYm,\aBfApM .naө>Xqө©Vz!5= "F4eҸJop֨:PO|q4 S8^!4 ArPmyAR'Aj|> {uB&*zK=9#`w$'|b'fksN#ϋYdY,GJAV$ ;HJ-2<3,Pt8Ό?|Պ*1W_;.iGm2jǗXnU.&u1>Iݫ½ԝy!C]5}X{RERo^2*)ԇΤ41XH.(E c }mhj#R73jiPjbV\+D*&*Ie:mc@S7%5 s.f?LOOu@$L|*M pДeR\dxΙNOrKrBlɮwIiŽRkt4ՕJ-4)GE}:yϮ(cƊ)϶V9Uam|'Mivh{R٭;z /x_{;ӣQj_*]=uiÓ(.ZliY & Όb [hQ"qF(IU$7k GtDH(XIA\Atʊ7t`Zz؋\_(JsZȑlӣ%"B:&2v=ڀOQlSHj&HlRȽ71 Lu8jOIXkLq2zn 4IdYG,4&V8&Þ*}1zkֆiI{8ޭ_ sbrq]%{ -f_V лzkOlن:V]H5Ţ*Z#8ڊJ>)Vb%h:lQłjKP##S)"qdRCWXG˅YhX@K]-'ujIAqM9a3^nt$yi5*⣛bV {Zu# pa!~4T4X<s[`Ub槦Q%x\Hh.,T7EDI@$l{7 M|*`{ 'kEڃ 4V"dAl% 13؃XB=F1RDiSYTXfAZ5YKD )!JNYÜVckRhdraz=yyXB.ddJZ$\z$Ou$9FB缘$[.jw`Dl5 +05AIK$ LtӮL`.%w2zY턧ډ.BX!%$ĨډW  Ƅf0%L1̝s̟;g9Eg':A4i8M`0f T%Fd +uBC^VTj˄vH?^ZϝzZԏĦ$R)ckpa8D fPBA"+I  NlU+U+qvFoG x1Ti|} ֲNr)o#}>YU%tHمţ(7z=)AK,c_ȃyJ2lm /H"D5v6=B\/E1ᜥRGV Ew4L&|DmZS`#Z{Tjd^yŐ5)D00رØZHYBHY0sՁҒ]^O(ϖ(P9 9g+ϸḦ́;WUsN]ϝյ7C < kR?ʡw2%e*MLccGVX%,URܬjKI tWo.V/_+LFɣN合ghQlbH"fh((5FJpo^r+J|{16-2 Ӛ:-*ݤcq`hR,20Q 0>J)äJe c"1SGIog:%:u<q{ԭ^6;U|yPPgvEC :pt=/9| ?Ozv ;};hgpSr=w,v={=r5A8ܭf^;n!DrERۣ&%V 4Dc̶HgM2A^ Mבqj lkKSkhsa #x,#Ȥ4~0XdtʢDX2fR/KzԵ˒ @& y^>U~-(g.gχ͓=?φ{mBGwG2I!T>W7V=CZK6KoK& 
=+$OO-29V,zkOZۚu?޴ꊭ\N l9E|DL(j;B""EMXIݢ.JPl̐YcCѮ2V!@Q)Ev-D#a*L=,cՅQ]B V&91Z@ yiw)nQÈlq+N7ă^s\%\8;dzKQwg3wqbwxC`A X tQA`Q%Mm T+*v}\wb'5$u&О <TII;,FX2b 0' !i7уC=@ /0:)'Phv綬K3gЉdڻqߦnHΞ:'x`;酙滿ŔͿ) |r7/@.zDVOP*ES,R蚘ԤR5Sl1s^E1XbЦ"8<0n{:3=Kwl?7^&Ɂ@u4A{zz="L_8[Ӥ(Je?r~r;hJ< LyW<'VVYu\]ɗ!&EW%F#%h$z:W`wnp1^M:oFC{zyQ2>=u*Κ" J "Fn+PYW蟝  _/fpm{_?0wF~vc3~8s?YWo Q/L/z ISL^]s(ڌppIoqt9Nn/}?l/̯n\=3/ 7y_.9tܤ~2|M׶ߗNj/fLŝ{2߇pO><]O:_]ŸQ1wꁐd@qg>Pz =u*c;&0ͫVL5T?G:S8P+sjQzAX|h?w8 Kڒ$ Vr1J̠`kTqgwf\e[F?@(SŦ19k{͏tnr`9Q"^dN^,0>eǺ }E8K.;Y+}ƒ <Ê'&a,/DElRtA~0ֻ})~޿ :03))7 @͏ƽnV gL#uq:Ϭg[?0 {Zx>~MKt6N{Φp7udux&wq!}MɦtԱ1?LJ۳\ Ġ%bwZxpw(2CyQޝU$Db?T.L*>,az_O`>=4t%/sY}(&z;qﶉ $VZ|"昇bM@x7I~#"z%r7dC ոKV`>x'HDaK "@Dxq҄x TPzJ+).^(rl.\L,յIx>8!lFq30to{tFxF#7-D z# Q)ݯ** 8cc ~o7^!45ZytD۶?iqEv~yy ϲ4$#L}R8{;!j $$/GQ"ySVjѫިjDSVWnon<)Fe$GjRqYMXio j(d쓫3i\@BH˺b}YwWwK⡩3&;xL6,t9sw' $hlгt=GU7X,iqfLypm+*|WX&@RNGFClړ)۲;؅V.=};muV$6 ryGK%l/]JzQy841U}"W C@YmNfߠɾ2NgSr,mWrdUuwikRV!ے ;õe{VX?{Ы g:ٛʄM fi=š.96ΙՏ\>rt̺Tzq?,v_WmuW:oZm5>eTKW/E#?lM w%\ux-lܵ8$~>2#yu[I֯.*͆bv=cotV_{ T,_!^q\Qێ6C&w6sCQ겼=tu_.D,]eѬ7ɆkJ|Zv4gՍJIln_Vl(Uh ٟZė\٭7m{YJ#y64gU6|X5MCERn;]JM/+&k7`|!ǟ]XL-vS("76ԸM-6}f!!)n;xl6CVB b"e$^.({Z:BMKb,ԽiXȈKj*MlGw@?eߟNTzoCy֕o~(t|hiFڡ"F:#w{31$sz|->;wDt8aGWI)s "'YngNKԀWLdkD|/&A50=?#o"c ;A b B{O+Fћ ;DD{qH7N'0[I84yMEKeu^}-7yM)tMBWI3T0uMtjCNWnȥHQ7;^pwĩ=Gd3W\G1ED/0lDmg lg疻@F' >9|/U/{79M3 5*?iS xLR?v=Vru2A7Zh Qzy} =Ui{\B屗Sn&f ?Ŭ^E^. L'8xUԲĖf0涯a nnz+ C4av_eO3aF?nf9h:ᨯ`>d;.EF~|t>*ZrI+Ha]IEb|oK##65>R|xq{\#e=mШ Kq8$8vCKTKپoD|Cho2"~iY><k͏lXX]i?5Yw'RocoT׃㚟3_hN_g>̿e#: ؠ p;o i/yC/'ms4u!OgukaIV`ݔPwE >4uMGEk 38F~DKXD,888_{-gb■nKᅼqiWZ9L1#R%gcuer<%r RܷD9 k4}&$)ܟ̠<-|UkLMtb1(~UgI'~%R~ j?)0}`n _a X%QAR$RO"PW X>!`e0-ן!m1q~+Тx˚v+{k :PQCϳ sh Ah (ػ7n,W2H@`1 b U$֤jtxJR*֫FhQU9|˜qeH2 OJcCR,)7o3`j^?}ћ|t@uYlm^sC2^Jao鷘~{Z z[/s[)pjR$TC9? ,I9BN4c L)e: k.-b-W'׍5H >ut߯_Hf_yTY0>w(5ݢ_?U0{yF t|qݻbC ٝqM'tcSmۏ%c+,]^8U4!3#rx}8()l~ 1B\^ar˱츕qSձn8ZJcF(ʛO"l#>ާzo {O<A,)T e߽}+AJR]i\㨴fϨ'V! 
var/home/core/zuul-output/logs/kubelet.log
Jan 20 11:04:30 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 20 11:04:30 crc restorecon[4689]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 20 11:04:30 crc restorecon[4689]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c138,c778 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c138,c778 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 20 11:04:30 crc 
restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:30 crc restorecon[4689]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 11:04:30 crc restorecon[4689]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 20 11:04:30 crc restorecon[4689]: 
/var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 11:04:30 crc restorecon[4689]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 11:04:30 crc restorecon[4689]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 11:04:30 crc restorecon[4689]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 20 11:04:30 crc restorecon[4689]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 11:04:30 crc restorecon[4689]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 20 11:04:30 crc 
restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 11:04:30 crc restorecon[4689]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 20 11:04:30 crc restorecon[4689]: 
/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 11:04:31 
crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 11:04:31 crc 
restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 11:04:31 crc 
restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 20 11:04:31 crc 
restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 
11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]:
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc 
restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc 
restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc 
restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 
crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc 
restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc 
restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc 
restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc 
restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc 
restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 11:04:31 crc restorecon[4689]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 20 11:04:32 crc kubenswrapper[4725]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 11:04:32 crc kubenswrapper[4725]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 20 11:04:32 crc kubenswrapper[4725]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 11:04:32 crc kubenswrapper[4725]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 20 11:04:32 crc kubenswrapper[4725]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 20 11:04:32 crc kubenswrapper[4725]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.739823 4725 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743580 4725 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743601 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743605 4725 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743610 4725 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743615 4725 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743620 4725 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743624 4725 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743632 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743637 4725 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743641 4725 
feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743649 4725 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743653 4725 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743656 4725 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743665 4725 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743669 4725 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743672 4725 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743676 4725 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743680 4725 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743684 4725 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743688 4725 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743692 4725 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743696 4725 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743699 4725 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743703 4725 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 20 11:04:32 crc 
kubenswrapper[4725]: W0120 11:04:32.743707 4725 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743714 4725 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743718 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743721 4725 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743725 4725 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743729 4725 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743733 4725 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743738 4725 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743742 4725 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743748 4725 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743754 4725 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743759 4725 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743762 4725 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743769 4725 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743773 4725 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743777 4725 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743781 4725 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743784 4725 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743788 4725 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743792 4725 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743900 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743905 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743910 4725 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743914 4725 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743918 4725 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743921 4725 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743925 4725 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743932 4725 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743936 4725 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743941 4725 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743950 4725 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743962 4725 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743968 4725 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743974 4725 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743979 4725 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743983 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743988 4725 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743992 4725 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743996 4725 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.744005 4725 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.744011 4725 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.744016 4725 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.744025 4725 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.744031 4725 feature_gate.go:330] unrecognized feature gate: Example
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.744035 4725 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.744041 4725 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.744045 4725 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744407 4725 flags.go:64] FLAG: --address="0.0.0.0"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744510 4725 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744537 4725 flags.go:64] FLAG: --anonymous-auth="true"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744546 4725 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744557 4725 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744563 4725 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744573 4725 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744588 4725 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744593 4725 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744597 4725 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744603 4725 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744609 4725 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744615 4725 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744620 4725 flags.go:64] FLAG: --cgroup-root=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744625 4725 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744630 4725 flags.go:64] FLAG: --client-ca-file=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744640 4725 flags.go:64] FLAG: --cloud-config=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744652 4725 flags.go:64] FLAG: --cloud-provider=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744660 4725 flags.go:64] FLAG: --cluster-dns="[]"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744675 4725 flags.go:64] FLAG: --cluster-domain=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744681 4725 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744686 4725 flags.go:64] FLAG: --config-dir=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744691 4725 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744696 4725 flags.go:64] FLAG: --container-log-max-files="5"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744704 4725 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744709 4725 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744716 4725 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744727 4725 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744740 4725 flags.go:64] FLAG: --contention-profiling="false"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744745 4725 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744970 4725 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744987 4725 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744994 4725 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745004 4725 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745009 4725 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745013 4725 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745017 4725 flags.go:64] FLAG: --enable-load-reader="false"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745023 4725 flags.go:64] FLAG: --enable-server="true"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745027 4725 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745041 4725 flags.go:64] FLAG: --event-burst="100"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745046 4725 flags.go:64] FLAG: --event-qps="50"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745050 4725 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745055 4725 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745060 4725 flags.go:64] FLAG: --eviction-hard=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745102 4725 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745107 4725 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745111 4725 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745117 4725 flags.go:64] FLAG: --eviction-soft=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745121 4725 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745127 4725 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745132 4725 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745138 4725 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745143 4725 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745148 4725 flags.go:64] FLAG: --fail-swap-on="true"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745152 4725 flags.go:64] FLAG: --feature-gates=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745159 4725 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745163 4725 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745169 4725 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745175 4725 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745180 4725 flags.go:64] FLAG: --healthz-port="10248"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745185 4725 flags.go:64] FLAG: --help="false"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745190 4725 flags.go:64] FLAG: --hostname-override=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745195 4725 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745202 4725 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745209 4725 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745214 4725 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745218 4725 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745223 4725 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745227 4725 flags.go:64] FLAG: --image-service-endpoint=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745231 4725 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745236 4725 flags.go:64] FLAG: --kube-api-burst="100"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745240 4725 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745245 4725 flags.go:64] FLAG: --kube-api-qps="50"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745249 4725 flags.go:64] FLAG: --kube-reserved=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745254 4725 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745258 4725 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745263 4725 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745267 4725 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745272 4725 flags.go:64] FLAG: --lock-file=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745276 4725 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745280 4725 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745285 4725 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745295 4725 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745309 4725 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745314 4725 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745326 4725 flags.go:64] FLAG: --logging-format="text"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745337 4725 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745344 4725 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745350 4725 flags.go:64] FLAG: --manifest-url=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745356 4725 flags.go:64] FLAG: --manifest-url-header=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745368 4725 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745373 4725 flags.go:64] FLAG: --max-open-files="1000000"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745379 4725 flags.go:64] FLAG: --max-pods="110"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745384 4725 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745389 4725 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745394 4725 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745399 4725 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745404 4725 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745408 4725 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745413 4725 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745434 4725 flags.go:64] FLAG: --node-status-max-images="50"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745439 4725 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745443 4725 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745450 4725 flags.go:64] FLAG: --pod-cidr=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745455 4725 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745463 4725 flags.go:64] FLAG: --pod-manifest-path=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745468 4725 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745472 4725 flags.go:64] FLAG: --pods-per-core="0"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745477 4725 flags.go:64] FLAG: --port="10250"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745482 4725 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745486 4725 flags.go:64] FLAG: --provider-id=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745490 4725 flags.go:64] FLAG: --qos-reserved=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745495 4725 flags.go:64] FLAG: --read-only-port="10255"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745500 4725 flags.go:64] FLAG: --register-node="true"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745504 4725 flags.go:64] FLAG: --register-schedulable="true"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745508 4725 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745519 4725 flags.go:64] FLAG: --registry-burst="10"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745523 4725 flags.go:64] FLAG: --registry-qps="5"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745528 4725 flags.go:64] FLAG: --reserved-cpus=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745534 4725 flags.go:64] FLAG: --reserved-memory=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745541 4725 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745546 4725 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745551 4725 flags.go:64] FLAG: --rotate-certificates="false"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745555 4725 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745559 4725 flags.go:64] FLAG: --runonce="false"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745564 4725 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745569 4725 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745574 4725 flags.go:64] FLAG: --seccomp-default="false"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745579 4725 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745584 4725 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745590 4725 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745596 4725 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745601 4725 flags.go:64] FLAG: --storage-driver-password="root"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745607 4725 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745611 4725 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745616 4725 flags.go:64] FLAG: --storage-driver-user="root"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745621 4725 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745626 4725 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745630 4725 flags.go:64] FLAG: --system-cgroups=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745635 4725 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745643 4725 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745648 4725 flags.go:64] FLAG: --tls-cert-file=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745652 4725 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745660 4725 flags.go:64] FLAG: --tls-min-version=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745665 4725 flags.go:64] FLAG: --tls-private-key-file=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745669 4725 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745673 4725 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745677 4725 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745683 4725 flags.go:64] FLAG: --v="2"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745691 4725 flags.go:64] FLAG: --version="false"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745699 4725 flags.go:64] FLAG: --vmodule=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745705 4725 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745709 4725 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745914 4725 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745921 4725 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745927 4725 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745932 4725 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745939 4725 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745943 4725 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745949 4725 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745955 4725 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745960 4725 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745965 4725 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745969 4725 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745973 4725 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745977 4725 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745981 4725 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745985 4725 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745988 4725 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745993 4725 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745997 4725 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746001 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746004 4725 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746009 4725 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746013 4725 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746016 4725 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746020 4725 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746024 4725 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746028 4725 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746031 4725 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746035 4725 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746040 4725 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746044 4725 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746047 4725 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746051 4725 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746055 4725 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746058 4725 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746062 4725 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746065 4725 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746069 4725 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746073 4725 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746094 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746099 4725 feature_gate.go:330] unrecognized feature gate: Example
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746102 4725 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746106 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746110 4725 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746114 4725 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746151 4725 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746155 4725 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746160 4725 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746165 4725 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746169 4725 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746174 4725 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746178 4725 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746182 4725 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746186 4725 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746191 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746195 4725 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746199 4725 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746203 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746206 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746210 4725 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746214 4725 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746219 4725 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746223 4725 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746227 4725 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746232 4725 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746238 4725 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746244 4725 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746249 4725 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746254 4725 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746261 4725 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746266 4725 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746282 4725 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.746300 4725 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.755942 4725 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.755991 4725 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756114 4725 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756130 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756139 4725 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756145 4725 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756153 4725 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756158 4725 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756163 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756169 4725 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756174 4725 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756180 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756185 4725 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756190 4725 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756195 4725 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756200 4725 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756206 4725 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756211 4725 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756216 4725 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756222 4725 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756228 4725 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756234 4725 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756240 4725 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756247 4725 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756253 4725 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756258 4725 feature_gate.go:330] unrecognized feature gate: Example
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756263 4725 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756269 4725 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756274 4725 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756279 4725 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756284 4725 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756289 4725 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756295 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756300 4725 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756305 4725 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756310 4725 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756317 4725 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756323 4725 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756328 4725 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756335 4725 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756343 4725 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756349 4725 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756356 4725 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756361 4725 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756367 4725 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756373 4725 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756378 4725 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756384 4725 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756390 4725 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756397 4725 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756404 4725 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756409 4725 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756415 4725 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756421 4725 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756426 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756432 4725 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756437 4725 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756443 4725 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756448 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756454 4725 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756459 4725 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756465 4725 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756472 4725 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756478 4725 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756483 4725 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756488 4725 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756493 4725 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756499 4725 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756506 4725 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756512 4725 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756517 4725 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756523 4725 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756530 4725 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.756540 4725 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756710 4725 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756718 4725 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756724 4725 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756729 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756735 4725 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756740 4725 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756747 4725 feature_gate.go:353] 
Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756754 4725 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756760 4725 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756766 4725 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756772 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756778 4725 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756785 4725 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756791 4725 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756797 4725 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756805 4725 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756812 4725 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756818 4725 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756824 4725 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756830 4725 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756836 4725 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756842 4725 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756848 4725 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756854 4725 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756860 4725 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756866 4725 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756871 4725 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756877 4725 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756885 4725 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756891 4725 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756897 4725 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756902 4725 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756908 4725 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756913 4725 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756919 4725 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756924 4725 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756930 4725 feature_gate.go:330] unrecognized feature gate: Example Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756935 4725 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756941 4725 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756946 4725 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756951 4725 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756957 4725 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756962 4725 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756967 4725 feature_gate.go:330] unrecognized 
feature gate: VSphereControlPlaneMachineSet Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756973 4725 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756981 4725 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756986 4725 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756992 4725 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756997 4725 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757002 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757008 4725 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757013 4725 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757019 4725 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757024 4725 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757029 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757034 4725 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757039 4725 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757045 4725 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 20 11:04:32 crc kubenswrapper[4725]: 
W0120 11:04:32.757050 4725 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757056 4725 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757062 4725 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757068 4725 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757096 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757104 4725 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757112 4725 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757118 4725 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757123 4725 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757129 4725 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757134 4725 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757139 4725 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757146 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.757155 4725 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true 
DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.757639 4725 server.go:940] "Client rotation is on, will bootstrap in background" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.760749 4725 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.760865 4725 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.761477 4725 server.go:997] "Starting client certificate rotation" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.761507 4725 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.766723 4725 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-13 14:29:51.546099922 +0000 UTC Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.766935 4725 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.773106 4725 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 20 11:04:32 crc kubenswrapper[4725]: E0120 11:04:32.777216 4725 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create 
certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.777614 4725 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.788288 4725 log.go:25] "Validated CRI v1 runtime API" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.803333 4725 log.go:25] "Validated CRI v1 image API" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.804889 4725 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.807742 4725 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-20-11-00-09-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.807777 4725 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.826752 4725 manager.go:217] Machine: {Timestamp:2026-01-20 11:04:32.825123566 +0000 UTC m=+1.033445559 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 
AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:38403e10-86da-4c2a-98da-84319c85ddeb BootID:6eec783f-1471-434e-9e46-81d4bd7eabfe Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:a5:5a:0b Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:a5:5a:0b Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:0c:ba:c8 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:2c:9c:20 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:f5:f4:84 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:93:ba:44 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:6e:d3:cc:a9:15:45 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:66:6f:1e:cb:28:dc Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 
Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data 
Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.827037 4725 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.827292 4725 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.827947 4725 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.828163 4725 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.828211 4725 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.828459 4725 topology_manager.go:138] "Creating topology manager with none policy"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.828471 4725 container_manager_linux.go:303] "Creating device plugin manager"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.828728 4725 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.828760 4725 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.828938 4725 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.829042 4725 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.829824 4725 kubelet.go:418] "Attempting to sync node with API server"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.829848 4725 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.829874 4725 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.829890 4725 kubelet.go:324] "Adding apiserver pod source"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.829905 4725 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.831992 4725 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.832536 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 20 11:04:32 crc kubenswrapper[4725]: E0120 11:04:32.832610 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError"
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.832597 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 20 11:04:32 crc kubenswrapper[4725]: E0120 11:04:32.832717 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.832984 4725 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.843367 4725 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.844299 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.844332 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.844342 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.844351 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.844366 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.844377 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.844387 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.844402 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.844415 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.844425 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.844458 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.844467 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.845014 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.845583 4725 server.go:1280] "Started kubelet"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.845952 4725 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.845953 4725 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.846592 4725 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.850790 4725 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 20 11:04:32 crc systemd[1]: Started Kubernetes Kubelet.
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.852340 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.852404 4725 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.852984 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 02:51:32.011257667 +0000 UTC
Jan 20 11:04:32 crc kubenswrapper[4725]: E0120 11:04:32.857440 4725 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.857659 4725 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.857677 4725 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.857738 4725 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 20 11:04:32 crc kubenswrapper[4725]: E0120 11:04:32.856254 4725 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.194:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188c6b9c55a2a206 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 11:04:32.845554182 +0000 UTC m=+1.053876165,LastTimestamp:2026-01-20 11:04:32.845554182 +0000 UTC m=+1.053876165,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 11:04:32 crc kubenswrapper[4725]: E0120 11:04:32.858167 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="200ms"
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.858196 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 20 11:04:32 crc kubenswrapper[4725]: E0120 11:04:32.858322 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.861393 4725 factory.go:55] Registering systemd factory
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.861436 4725 factory.go:221] Registration of the systemd container factory successfully
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.861580 4725 server.go:460] "Adding debug handlers to kubelet server"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.862863 4725 factory.go:153] Registering CRI-O factory
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.862885 4725 factory.go:221] Registration of the crio container factory successfully
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.862955 4725 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.862983 4725 factory.go:103] Registering Raw factory
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.863003 4725 manager.go:1196] Started watching for new ooms in manager
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.863689 4725 manager.go:319] Starting recovery of all containers
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867667 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867722 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 20 11:04:32 crc 
kubenswrapper[4725]: I0120 11:04:32.867737 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867750 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867762 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867774 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867785 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867796 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867811 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867821 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867833 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867846 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867857 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867872 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867883 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867894 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867905 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867938 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867951 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867964 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867974 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867986 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867997 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868008 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868019 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868031 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868043 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config"
seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868056 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868111 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868126 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868138 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868148 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868160 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868173 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868186 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868198 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868210 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868222 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868233 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868244 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868255 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868267 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868279 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868292 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868304 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868319 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868337 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868350 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868366 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868382 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868396 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868412 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"
volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868507 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868525 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868539 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868552 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868565 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868578 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868589 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868600 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868612 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868626 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868637 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868649 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868663 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868677 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868689 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868700 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868741 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868754 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 
11:04:32.868766 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868778 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868790 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868801 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868812 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868824 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868837 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.873735 4725 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.873832 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.873861 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.873888 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.873911 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.873969 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.873990 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874010 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874032 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874052 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874073 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874121 4725 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874140 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874157 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874177 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874197 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874217 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874235 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874251 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874268 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874338 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874362 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874379 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874396 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" 
seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874413 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874430 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874449 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874464 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874492 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874515 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 
11:04:32.874534 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874554 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874573 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874593 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874612 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874634 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874653 4725 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874676 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874696 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874713 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874731 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874749 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874765 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874784 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874801 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874817 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874831 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874847 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874864 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874879 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874897 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874912 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874929 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874947 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874969 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" 
seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874987 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875006 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875024 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875042 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875058 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875071 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875136 
4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875155 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875167 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875182 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875196 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875210 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875226 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875240 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875253 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875271 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875283 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875297 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875310 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875323 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875336 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875350 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875361 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875375 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875389 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875404 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875416 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875430 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875441 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875455 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875467 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875480 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875495 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875508 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875521 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875534 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875546 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 
20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875567 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875580 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875592 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875605 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875619 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875631 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875644 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875658 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875671 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875683 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875698 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875711 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875726 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875739 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875751 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875764 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875776 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875789 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875802 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875814 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875827 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875840 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875853 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875866 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875880 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875894 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875907 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875920 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875933 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875947 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875960 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875972 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875984 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875997 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.876009 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.876022 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.876037 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.876049 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.876062 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.876076 4725 reconstruct.go:97] "Volume reconstruction finished"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.876112 4725 reconciler.go:26] "Reconciler: start to sync state"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.882811 4725 manager.go:324] Recovery completed
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.895105 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.896670 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.896721 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.896738 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.897766 4725 cpu_manager.go:225] "Starting CPU manager" policy="none"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.897793 4725 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.897817 4725 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.909224 4725 policy_none.go:49] "None policy: Start"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.910486 4725 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.910558 4725 state_mem.go:35] "Initializing new in-memory state store"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.928983 4725 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.930936 4725 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.930972 4725 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.931029 4725 kubelet.go:2335] "Starting kubelet main sync loop"
Jan 20 11:04:32 crc kubenswrapper[4725]: E0120 11:04:32.931250 4725 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.932072 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 20 11:04:32 crc kubenswrapper[4725]: E0120 11:04:32.932178 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError"
Jan 20 11:04:32 crc kubenswrapper[4725]: E0120 11:04:32.957577 4725 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.980744 4725 manager.go:334] "Starting Device Plugin manager"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.981345 4725 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.981373 4725 server.go:79] "Starting device plugin registration server"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.982056 4725 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.982075 4725 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.982444 4725 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.982571 4725 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.982577 4725 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 20 11:04:32 crc kubenswrapper[4725]: E0120 11:04:32.989855 4725 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.032175 4725 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"]
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.032414 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.034208 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.034269 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.034287 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.034584 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.035045 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.035122 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.035984 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.036029 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.036057 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.036068 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.036033 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.036185 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.036238 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.036503 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.036541 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.037109 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.037150 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.037176 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.037295 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.037410 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.037461 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.037478 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.037499 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.037537 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.038107 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.038153 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.038171 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.038206 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.038221 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.038229 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.038355 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.038388 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.038416 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.039429 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.039473 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.039490 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.039436 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.039673 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.039771 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.039909 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.040004 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.040870 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.040955 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.041060 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:04:33 crc kubenswrapper[4725]: E0120 11:04:33.058768 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="400ms"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.079114 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.079234 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.079286 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.079329 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.079375 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.079420 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.079460 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.079593 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.079624 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.079646 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.079674 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.079695 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.079713 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.079732 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.079753 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.083551 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.085058 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.085198 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.085292 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.085410 4725 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: E0120 11:04:33.086036 4725 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181046 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181129 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181165 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181194 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181222 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181249 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181277 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181306 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181308 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181335 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181341 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181364 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181375 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181404 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181392 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181380 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181436 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181444 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181450 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181299 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181337 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181408 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181474 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName:
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181489 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181538 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181553 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181569 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181662 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.182752 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.182966 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.287115 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.288213 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.288246 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.288255 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.288277 4725 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 11:04:33 crc kubenswrapper[4725]: E0120 11:04:33.288774 4725 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Jan 20 11:04:33 crc 
kubenswrapper[4725]: I0120 11:04:33.380536 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.401357 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: W0120 11:04:33.407069 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-be81bafcce2e00ebec316aca6ad7b9c4a298e6f3417b49b2b66df2c166addc97 WatchSource:0}: Error finding container be81bafcce2e00ebec316aca6ad7b9c4a298e6f3417b49b2b66df2c166addc97: Status 404 returned error can't find the container with id be81bafcce2e00ebec316aca6ad7b9c4a298e6f3417b49b2b66df2c166addc97 Jan 20 11:04:33 crc kubenswrapper[4725]: W0120 11:04:33.423722 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-1f527841984fbc9666ac99d6b4a0424b10d7e6c5966460efca598697439dce53 WatchSource:0}: Error finding container 1f527841984fbc9666ac99d6b4a0424b10d7e6c5966460efca598697439dce53: Status 404 returned error can't find the container with id 1f527841984fbc9666ac99d6b4a0424b10d7e6c5966460efca598697439dce53 Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.431021 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: W0120 11:04:33.449912 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-ea1146a2c66bcb07cf4c184268f6d2361acf42c817e3726c7c48adee498b4005 WatchSource:0}: Error finding container ea1146a2c66bcb07cf4c184268f6d2361acf42c817e3726c7c48adee498b4005: Status 404 returned error can't find the container with id ea1146a2c66bcb07cf4c184268f6d2361acf42c817e3726c7c48adee498b4005 Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.452853 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: E0120 11:04:33.459442 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="800ms" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.463048 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: W0120 11:04:33.472986 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-a52ddad4327dcdc2d367cffbc3b51dce9d61647332145b3bbb57b4ea703e4b1b WatchSource:0}: Error finding container a52ddad4327dcdc2d367cffbc3b51dce9d61647332145b3bbb57b4ea703e4b1b: Status 404 returned error can't find the container with id a52ddad4327dcdc2d367cffbc3b51dce9d61647332145b3bbb57b4ea703e4b1b Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.689134 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.690267 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.690297 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.690306 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.690328 4725 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 11:04:33 crc kubenswrapper[4725]: E0120 11:04:33.690744 4725 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Jan 20 11:04:33 crc kubenswrapper[4725]: W0120 11:04:33.839652 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused 
Jan 20 11:04:33 crc kubenswrapper[4725]: E0120 11:04:33.839939 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.847592 4725 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.854689 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 10:23:34.62189153 +0000 UTC Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.936324 4725 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c" exitCode=0 Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.936391 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c"} Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.936460 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ae01245f715e7a85876f2d515c21f8753ae5352e8c3e5016674943b533d5ccd4"} Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.936538 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:33 crc kubenswrapper[4725]: 
I0120 11:04:33.937469 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.937493 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.937501 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.937618 4725 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="6faa6708fa29d5eb071d0383feaa4d4ca6f04d52a3b5ad147a929daee25712aa" exitCode=0 Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.937683 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"6faa6708fa29d5eb071d0383feaa4d4ca6f04d52a3b5ad147a929daee25712aa"} Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.937713 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a52ddad4327dcdc2d367cffbc3b51dce9d61647332145b3bbb57b4ea703e4b1b"} Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.937794 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.938834 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.938873 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.938887 4725 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.939871 4725 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436" exitCode=0 Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.939936 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436"} Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.939956 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ea1146a2c66bcb07cf4c184268f6d2361acf42c817e3726c7c48adee498b4005"} Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.940037 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.941006 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.941043 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.941053 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.941523 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b"} Jan 20 
11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.941918 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1f527841984fbc9666ac99d6b4a0424b10d7e6c5966460efca598697439dce53"} Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.946771 4725 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527" exitCode=0 Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.946833 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527"} Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.946876 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"be81bafcce2e00ebec316aca6ad7b9c4a298e6f3417b49b2b66df2c166addc97"} Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.947050 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.948212 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.948253 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.948266 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.951168 4725 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.952013 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.952046 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.952058 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:34 crc kubenswrapper[4725]: E0120 11:04:34.260445 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="1.6s" Jan 20 11:04:34 crc kubenswrapper[4725]: W0120 11:04:34.294364 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 20 11:04:34 crc kubenswrapper[4725]: E0120 11:04:34.294442 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 20 11:04:34 crc kubenswrapper[4725]: W0120 11:04:34.334809 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 20 11:04:34 crc 
kubenswrapper[4725]: E0120 11:04:34.334883 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 20 11:04:34 crc kubenswrapper[4725]: W0120 11:04:34.460635 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 20 11:04:34 crc kubenswrapper[4725]: E0120 11:04:34.460830 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.491001 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.492388 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.492470 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.492487 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.492543 4725 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 11:04:34 crc kubenswrapper[4725]: E0120 11:04:34.493311 4725 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.820448 4725 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 20 11:04:34 crc kubenswrapper[4725]: E0120 11:04:34.822108 4725 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.847380 4725 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.855351 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 05:27:30.890299221 +0000 UTC Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.950729 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee"} Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.950773 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d"} Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.950784 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f"} Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.950862 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.951707 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.951730 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.951738 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.959259 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89"} Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.959896 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578"} Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.959954 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd"} Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.962818 4725 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621" exitCode=0 Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.962893 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621"} Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.963019 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.963815 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.963835 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.963845 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.966932 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"8132f828bbc5741a0a318fe005b753dff87f06d263948c088483daca665ce86f"} Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.967007 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.968330 4725 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.978487 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.978544 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.984962 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b35e6d9a60b46ffa8c3718ab0311f05ffc829ac99c3a07206ee12402ebad872a"} Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.985012 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7c38f828d227876595bd9deb8b2d85fb59815051b2e2bfa76a4708ffed580f1c"} Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.985027 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9068777e4f01e848a82bf288d7a3392cdc91101ee3a01bb4a81df93821dd63a3"} Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.985138 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.985961 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.985978 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.985985 4725 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:35 crc kubenswrapper[4725]: I0120 11:04:35.855932 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 09:10:01.052319694 +0000 UTC Jan 20 11:04:35 crc kubenswrapper[4725]: I0120 11:04:35.991827 4725 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092" exitCode=0 Jan 20 11:04:35 crc kubenswrapper[4725]: I0120 11:04:35.991883 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092"} Jan 20 11:04:35 crc kubenswrapper[4725]: I0120 11:04:35.992467 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:35 crc kubenswrapper[4725]: I0120 11:04:35.994148 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:35 crc kubenswrapper[4725]: I0120 11:04:35.994212 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:35 crc kubenswrapper[4725]: I0120 11:04:35.994235 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:35 crc kubenswrapper[4725]: I0120 11:04:35.999467 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:35 crc kubenswrapper[4725]: I0120 11:04:35.999483 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b"} 
Jan 20 11:04:35 crc kubenswrapper[4725]: I0120 11:04:35.999530 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de"} Jan 20 11:04:35 crc kubenswrapper[4725]: I0120 11:04:35.999470 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:36 crc kubenswrapper[4725]: I0120 11:04:36.000764 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:36 crc kubenswrapper[4725]: I0120 11:04:36.000816 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:36 crc kubenswrapper[4725]: I0120 11:04:36.000838 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:36 crc kubenswrapper[4725]: I0120 11:04:36.001814 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:36 crc kubenswrapper[4725]: I0120 11:04:36.002115 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:36 crc kubenswrapper[4725]: I0120 11:04:36.002323 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:36 crc kubenswrapper[4725]: I0120 11:04:36.093543 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:36 crc kubenswrapper[4725]: I0120 11:04:36.094959 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:36 crc kubenswrapper[4725]: I0120 11:04:36.095121 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 11:04:36 crc kubenswrapper[4725]: I0120 11:04:36.095156 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:36 crc kubenswrapper[4725]: I0120 11:04:36.095202 4725 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 11:04:36 crc kubenswrapper[4725]: I0120 11:04:36.856977 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 19:29:45.778583141 +0000 UTC Jan 20 11:04:37 crc kubenswrapper[4725]: I0120 11:04:37.007115 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615"} Jan 20 11:04:37 crc kubenswrapper[4725]: I0120 11:04:37.007186 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8"} Jan 20 11:04:37 crc kubenswrapper[4725]: I0120 11:04:37.007205 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240"} Jan 20 11:04:37 crc kubenswrapper[4725]: I0120 11:04:37.007223 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7"} Jan 20 11:04:37 crc kubenswrapper[4725]: I0120 11:04:37.007254 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:37 crc kubenswrapper[4725]: I0120 11:04:37.007350 4725 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:04:37 crc kubenswrapper[4725]: I0120 11:04:37.008098 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:37 crc kubenswrapper[4725]: I0120 11:04:37.008142 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:37 crc kubenswrapper[4725]: I0120 11:04:37.008158 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:37 crc kubenswrapper[4725]: I0120 11:04:37.858443 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 15:10:05.759115011 +0000 UTC Jan 20 11:04:38 crc kubenswrapper[4725]: I0120 11:04:38.018214 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:38 crc kubenswrapper[4725]: I0120 11:04:38.018216 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c"} Jan 20 11:04:38 crc kubenswrapper[4725]: I0120 11:04:38.018226 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:38 crc kubenswrapper[4725]: I0120 11:04:38.019422 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:38 crc kubenswrapper[4725]: I0120 11:04:38.019448 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:38 crc kubenswrapper[4725]: I0120 11:04:38.019457 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 20 11:04:38 crc kubenswrapper[4725]: I0120 11:04:38.019573 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:38 crc kubenswrapper[4725]: I0120 11:04:38.019641 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:38 crc kubenswrapper[4725]: I0120 11:04:38.019666 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:38 crc kubenswrapper[4725]: I0120 11:04:38.135170 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:04:38 crc kubenswrapper[4725]: I0120 11:04:38.859645 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 07:06:04.40167717 +0000 UTC Jan 20 11:04:38 crc kubenswrapper[4725]: I0120 11:04:38.992016 4725 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 20 11:04:39 crc kubenswrapper[4725]: I0120 11:04:39.022298 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:39 crc kubenswrapper[4725]: I0120 11:04:39.022317 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:39 crc kubenswrapper[4725]: I0120 11:04:39.024002 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:39 crc kubenswrapper[4725]: I0120 11:04:39.024045 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:39 crc kubenswrapper[4725]: I0120 11:04:39.024062 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:39 crc 
kubenswrapper[4725]: I0120 11:04:39.024124 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:39 crc kubenswrapper[4725]: I0120 11:04:39.024152 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:39 crc kubenswrapper[4725]: I0120 11:04:39.024166 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:39 crc kubenswrapper[4725]: I0120 11:04:39.793233 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 11:04:39 crc kubenswrapper[4725]: I0120 11:04:39.793543 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:39 crc kubenswrapper[4725]: I0120 11:04:39.795175 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:39 crc kubenswrapper[4725]: I0120 11:04:39.795219 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:39 crc kubenswrapper[4725]: I0120 11:04:39.795236 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:39 crc kubenswrapper[4725]: I0120 11:04:39.799385 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 11:04:39 crc kubenswrapper[4725]: I0120 11:04:39.859871 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 02:45:48.355899048 +0000 UTC Jan 20 11:04:40 crc kubenswrapper[4725]: I0120 11:04:40.025620 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" 
Jan 20 11:04:40 crc kubenswrapper[4725]: I0120 11:04:40.025759 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 11:04:40 crc kubenswrapper[4725]: I0120 11:04:40.027186 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:40 crc kubenswrapper[4725]: I0120 11:04:40.027255 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:40 crc kubenswrapper[4725]: I0120 11:04:40.027269 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:40 crc kubenswrapper[4725]: I0120 11:04:40.066793 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 11:04:40 crc kubenswrapper[4725]: I0120 11:04:40.784992 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:04:40 crc kubenswrapper[4725]: I0120 11:04:40.785411 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:40 crc kubenswrapper[4725]: I0120 11:04:40.787182 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:40 crc kubenswrapper[4725]: I0120 11:04:40.787244 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:40 crc kubenswrapper[4725]: I0120 11:04:40.787261 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:40 crc kubenswrapper[4725]: I0120 11:04:40.860891 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 
2025-11-22 14:21:08.734732409 +0000 UTC Jan 20 11:04:41 crc kubenswrapper[4725]: I0120 11:04:41.028272 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:41 crc kubenswrapper[4725]: I0120 11:04:41.029260 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:41 crc kubenswrapper[4725]: I0120 11:04:41.029299 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:41 crc kubenswrapper[4725]: I0120 11:04:41.029309 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:41 crc kubenswrapper[4725]: I0120 11:04:41.861837 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 21:51:10.153713618 +0000 UTC Jan 20 11:04:42 crc kubenswrapper[4725]: I0120 11:04:42.030805 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:42 crc kubenswrapper[4725]: I0120 11:04:42.031920 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:42 crc kubenswrapper[4725]: I0120 11:04:42.031962 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:42 crc kubenswrapper[4725]: I0120 11:04:42.031975 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:42 crc kubenswrapper[4725]: I0120 11:04:42.296224 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 20 11:04:42 crc kubenswrapper[4725]: I0120 11:04:42.296499 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:42 crc 
kubenswrapper[4725]: I0120 11:04:42.299540 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:42 crc kubenswrapper[4725]: I0120 11:04:42.299583 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:42 crc kubenswrapper[4725]: I0120 11:04:42.299594 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:42 crc kubenswrapper[4725]: I0120 11:04:42.408730 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 11:04:42 crc kubenswrapper[4725]: I0120 11:04:42.862498 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 02:28:38.476715192 +0000 UTC Jan 20 11:04:42 crc kubenswrapper[4725]: E0120 11:04:42.990040 4725 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 20 11:04:43 crc kubenswrapper[4725]: I0120 11:04:43.034282 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:43 crc kubenswrapper[4725]: I0120 11:04:43.122599 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:43 crc kubenswrapper[4725]: I0120 11:04:43.122732 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:43 crc kubenswrapper[4725]: I0120 11:04:43.122763 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:43 crc kubenswrapper[4725]: I0120 11:04:43.127005 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 11:04:43 crc kubenswrapper[4725]: I0120 11:04:43.870387 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 17:56:50.002605137 +0000 UTC Jan 20 11:04:43 crc kubenswrapper[4725]: I0120 11:04:43.870933 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 20 11:04:43 crc kubenswrapper[4725]: I0120 11:04:43.871977 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:43 crc kubenswrapper[4725]: I0120 11:04:43.875256 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:43 crc kubenswrapper[4725]: I0120 11:04:43.875325 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:43 crc kubenswrapper[4725]: I0120 11:04:43.875341 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:44 crc kubenswrapper[4725]: I0120 11:04:44.114124 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:44 crc kubenswrapper[4725]: I0120 11:04:44.115129 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:44 crc kubenswrapper[4725]: I0120 11:04:44.115185 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:44 crc kubenswrapper[4725]: I0120 11:04:44.115194 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:44 crc kubenswrapper[4725]: I0120 11:04:44.240388 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 11:04:44 crc kubenswrapper[4725]: I0120 11:04:44.240618 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:44 crc kubenswrapper[4725]: I0120 11:04:44.242883 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:44 crc kubenswrapper[4725]: I0120 11:04:44.242910 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:44 crc kubenswrapper[4725]: I0120 11:04:44.242919 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:44 crc kubenswrapper[4725]: I0120 11:04:44.870800 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 07:14:55.583325404 +0000 UTC Jan 20 11:04:45 crc kubenswrapper[4725]: I0120 11:04:45.409250 4725 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 20 11:04:45 crc kubenswrapper[4725]: I0120 11:04:45.409350 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 20 11:04:45 crc kubenswrapper[4725]: I0120 11:04:45.848581 4725 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 20 11:04:45 crc kubenswrapper[4725]: E0120 11:04:45.861903 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Jan 20 11:04:45 crc kubenswrapper[4725]: I0120 11:04:45.871090 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 19:58:28.467703875 +0000 UTC Jan 20 11:04:45 crc kubenswrapper[4725]: W0120 11:04:45.946298 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 20 11:04:45 crc kubenswrapper[4725]: I0120 11:04:45.946443 4725 trace.go:236] Trace[2044783135]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Jan-2026 11:04:35.944) (total time: 10001ms): Jan 20 11:04:45 crc kubenswrapper[4725]: Trace[2044783135]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:04:45.946) Jan 20 11:04:45 crc kubenswrapper[4725]: Trace[2044783135]: [10.001578506s] [10.001578506s] END Jan 20 11:04:45 crc kubenswrapper[4725]: E0120 11:04:45.946474 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake 
timeout" logger="UnhandledError" Jan 20 11:04:46 crc kubenswrapper[4725]: E0120 11:04:46.096566 4725 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 20 11:04:46 crc kubenswrapper[4725]: W0120 11:04:46.317779 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 20 11:04:46 crc kubenswrapper[4725]: I0120 11:04:46.317861 4725 trace.go:236] Trace[892702363]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Jan-2026 11:04:36.316) (total time: 10001ms): Jan 20 11:04:46 crc kubenswrapper[4725]: Trace[892702363]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:04:46.317) Jan 20 11:04:46 crc kubenswrapper[4725]: Trace[892702363]: [10.001270546s] [10.001270546s] END Jan 20 11:04:46 crc kubenswrapper[4725]: E0120 11:04:46.317882 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 20 11:04:46 crc kubenswrapper[4725]: W0120 11:04:46.621027 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 20 11:04:46 crc kubenswrapper[4725]: I0120 11:04:46.621157 4725 trace.go:236] Trace[94559695]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Jan-2026 
11:04:36.619) (total time: 10001ms): Jan 20 11:04:46 crc kubenswrapper[4725]: Trace[94559695]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:04:46.621) Jan 20 11:04:46 crc kubenswrapper[4725]: Trace[94559695]: [10.001895965s] [10.001895965s] END Jan 20 11:04:46 crc kubenswrapper[4725]: E0120 11:04:46.621193 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 20 11:04:46 crc kubenswrapper[4725]: W0120 11:04:46.735203 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 20 11:04:46 crc kubenswrapper[4725]: I0120 11:04:46.735313 4725 trace.go:236] Trace[1818064882]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Jan-2026 11:04:36.733) (total time: 10001ms): Jan 20 11:04:46 crc kubenswrapper[4725]: Trace[1818064882]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:04:46.735) Jan 20 11:04:46 crc kubenswrapper[4725]: Trace[1818064882]: [10.001541063s] [10.001541063s] END Jan 20 11:04:46 crc kubenswrapper[4725]: E0120 11:04:46.735340 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" 
Jan 20 11:04:46 crc kubenswrapper[4725]: I0120 11:04:46.871351 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 05:00:50.871769403 +0000 UTC Jan 20 11:04:47 crc kubenswrapper[4725]: I0120 11:04:47.344141 4725 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 20 11:04:47 crc kubenswrapper[4725]: I0120 11:04:47.344191 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 20 11:04:47 crc kubenswrapper[4725]: I0120 11:04:47.355539 4725 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 20 11:04:47 crc kubenswrapper[4725]: I0120 11:04:47.355588 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 20 11:04:47 crc kubenswrapper[4725]: I0120 11:04:47.871802 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline 
is 2026-01-07 21:24:56.316932802 +0000 UTC Jan 20 11:04:48 crc kubenswrapper[4725]: I0120 11:04:48.872369 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 06:57:03.474293561 +0000 UTC Jan 20 11:04:49 crc kubenswrapper[4725]: I0120 11:04:49.297343 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:49 crc kubenswrapper[4725]: I0120 11:04:49.300016 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:49 crc kubenswrapper[4725]: I0120 11:04:49.300143 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:49 crc kubenswrapper[4725]: I0120 11:04:49.300229 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:49 crc kubenswrapper[4725]: I0120 11:04:49.300314 4725 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 11:04:49 crc kubenswrapper[4725]: E0120 11:04:49.306121 4725 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 20 11:04:49 crc kubenswrapper[4725]: I0120 11:04:49.873128 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 21:28:00.557087909 +0000 UTC Jan 20 11:04:50 crc kubenswrapper[4725]: I0120 11:04:50.407293 4725 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 20 11:04:50 crc kubenswrapper[4725]: I0120 11:04:50.791033 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:04:50 crc kubenswrapper[4725]: 
I0120 11:04:50.791839 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:50 crc kubenswrapper[4725]: I0120 11:04:50.793708 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:50 crc kubenswrapper[4725]: I0120 11:04:50.793738 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:50 crc kubenswrapper[4725]: I0120 11:04:50.793756 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:50 crc kubenswrapper[4725]: I0120 11:04:50.795907 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:04:50 crc kubenswrapper[4725]: I0120 11:04:50.873803 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 08:54:36.701888586 +0000 UTC Jan 20 11:04:51 crc kubenswrapper[4725]: I0120 11:04:51.134136 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:51 crc kubenswrapper[4725]: I0120 11:04:51.135121 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:51 crc kubenswrapper[4725]: I0120 11:04:51.135276 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:51 crc kubenswrapper[4725]: I0120 11:04:51.135407 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:51 crc kubenswrapper[4725]: I0120 11:04:51.512216 4725 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 20 11:04:51 crc kubenswrapper[4725]: I0120 11:04:51.874792 4725 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 17:53:03.958923742 +0000 UTC Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.342861 4725 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.353612 4725 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.448115 4725 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:46402->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.448149 4725 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:46394->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.448489 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:46394->192.168.126.11:17697: read: connection reset by peer" Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.448387 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get 
\"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:46402->192.168.126.11:17697: read: connection reset by peer" Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.450871 4725 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.450948 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.496289 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.496476 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.499278 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.499327 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.499351 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.500693 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 11:04:52 crc 
kubenswrapper[4725]: I0120 11:04:52.584200 4725 csr.go:261] certificate signing request csr-pnmds is approved, waiting to be issued Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.603368 4725 csr.go:257] certificate signing request csr-pnmds is issued Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.609712 4725 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.771351 4725 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 20 11:04:52 crc kubenswrapper[4725]: E0120 11:04:52.771981 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Post \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases?timeout=10s\": read tcp 38.102.83.194:35108->38.102.83.194:6443: use of closed network connection" interval="6.4s" Jan 20 11:04:52 crc kubenswrapper[4725]: W0120 11:04:52.771992 4725 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 20 11:04:52 crc kubenswrapper[4725]: E0120 11:04:52.771975 4725 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.194:35108->38.102.83.194:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188c6b9c7b4912fe openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 11:04:33.47721907 +0000 UTC m=+1.685541053,LastTimestamp:2026-01-20 11:04:33.47721907 +0000 UTC m=+1.685541053,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.875815 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 14:15:55.897358648 +0000 UTC Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.979057 4725 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.141458 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.144267 4725 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b" exitCode=255 Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.144407 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b"} Jan 20 11:04:53 crc kubenswrapper[4725]: 
I0120 11:04:53.264435 4725 scope.go:117] "RemoveContainer" containerID="809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.740018 4725 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-20 10:59:52 +0000 UTC, rotation deadline is 2026-11-10 05:23:34.8368344 +0000 UTC Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.740075 4725 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7050h18m41.09676263s for next certificate rotation Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.876570 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 15:02:58.946389832 +0000 UTC Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.876623 4725 apiserver.go:52] "Watching apiserver" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.879802 4725 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.880107 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.880439 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.880511 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.880514 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.880540 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.880778 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.880954 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:04:53 crc kubenswrapper[4725]: E0120 11:04:53.880979 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:04:53 crc kubenswrapper[4725]: E0120 11:04:53.880999 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:04:53 crc kubenswrapper[4725]: E0120 11:04:53.880953 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.882642 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.882868 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.882951 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.883039 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.883059 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.883782 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.883957 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.884180 4725 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.890599 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.904964 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.905142 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.919539 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.924444 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.934713 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.941908 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.950538 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.958872 4725 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.961321 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040227 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040285 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040309 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040324 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040339 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040363 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040382 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040401 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040420 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040442 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040458 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040482 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040497 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040512 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 
20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040531 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040549 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040563 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040585 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040601 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040618 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040670 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040686 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040701 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040718 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040744 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:04:54 
crc kubenswrapper[4725]: I0120 11:04:54.040769 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040801 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040828 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040844 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040860 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040877 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040891 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040910 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040931 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040951 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040967 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040981 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040996 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041010 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041024 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041063 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041096 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041110 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041125 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041140 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041155 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041170 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: 
\"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041184 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041199 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041214 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041230 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041245 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041260 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041288 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041314 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041330 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041346 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041363 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 11:04:54 crc 
kubenswrapper[4725]: I0120 11:04:54.041378 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041393 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041408 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041423 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041439 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041454 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041468 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041552 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041569 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041585 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041617 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 20 11:04:54 crc 
kubenswrapper[4725]: I0120 11:04:54.041648 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041663 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041680 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041695 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041710 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041724 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041739 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041754 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041773 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041787 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041801 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 20 11:04:54 crc 
kubenswrapper[4725]: I0120 11:04:54.041815 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041830 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041845 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041860 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041877 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041892 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041908 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041923 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041938 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.042387 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.042676 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). 
InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.043341 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.043465 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.043570 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.043676 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.044002 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.044018 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.044125 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.044230 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.044481 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.044500 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.044654 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.044786 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.044997 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.044995 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.045016 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.045200 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.045321 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.045354 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.045672 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.045692 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.046451 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.046501 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.046638 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.046683 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047030 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047287 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047348 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047568 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047592 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047610 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047626 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047643 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047659 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047674 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047690 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047706 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047722 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047739 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 20 11:04:54 crc kubenswrapper[4725]: 
I0120 11:04:54.047755 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047770 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047786 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047801 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047819 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047845 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047866 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047888 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047919 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047959 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047987 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048004 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048019 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048035 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048051 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048066 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048102 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048117 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048133 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048150 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048167 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048186 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 20 11:04:54 crc 
kubenswrapper[4725]: I0120 11:04:54.048204 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048221 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048238 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048253 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048268 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048283 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: 
\"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048299 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048315 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048339 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048354 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048375 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 
11:04:54.048392 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048407 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048422 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048438 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048456 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048471 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048493 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048516 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048532 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048547 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048563 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048579 4725 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048595 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048612 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048628 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048643 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048659 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048674 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048689 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048705 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048723 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048738 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048755 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048771 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048787 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048804 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048821 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048874 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048892 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048909 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048925 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048941 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048961 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048978 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048994 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.049010 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.049027 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.049043 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.049059 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.049106 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047517 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.049125 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.049142 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.049160 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.049206 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.049362 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.049435 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.049769 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.050869 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.050964 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.050977 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.050987 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051043 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051114 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051142 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051251 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051278 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051295 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051303 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051327 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051353 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051375 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051397 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051420 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051444 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051477 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051501 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051525 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051549 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051569 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051572 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051642 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051669 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051694 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051712 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051731 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051750 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051769 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051787 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051804 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051821 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051839 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051857 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051874 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051910 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051977 4725 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051988 4725 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051998 4725 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.053839 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.053854 4725 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.053863 4725 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.053940 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.053968 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054035 4725 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054063 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054100 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054116 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054133 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054149 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054164 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054179 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054196 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054213 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054227 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054242 4725 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054256 4725 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054269 4725 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054282 4725 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054295 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054308 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054321 4725 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054334 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054347 4725 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054363 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054379 4725 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054392 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054387 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054406 4725 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054423 4725 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054440 4725 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054671 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054785 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.055014 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.055247 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.055309 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.055345 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.055454 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.055621 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.055691 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.055787 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.055836 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.055861 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.056034 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.057353 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.058640 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.058729 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.058850 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.059045 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.059305 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.059345 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.059401 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.059761 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.059897 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.059932 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.060055 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.060172 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.060171 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.060199 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.060458 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.060759 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.060960 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.061103 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.061289 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.061347 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.061556 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.061675 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.061783 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.062145 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.062167 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.062848 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.063206 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.063217 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.063815 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.064036 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.064106 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.064222 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:04:54.564194229 +0000 UTC m=+22.772516192 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.064568 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.064737 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.065224 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.065326 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.065555 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.065576 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.065856 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.066289 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.066736 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.072656 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.072984 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.077282 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.077508 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.077627 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.077706 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.077790 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.077887 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.078058 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.078275 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.078284 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.078416 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.078473 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.078571 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.078707 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.079338 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.079532 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.079722 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.079928 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.080126 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.080460 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.083536 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.084001 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.097196 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.097504 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.097976 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.098241 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.098419 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.098582 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.098752 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.099217 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.099535 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.107158 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.107674 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.108190 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.111147 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.111372 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.117184 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.159276 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.159373 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.159444 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.159583 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.159675 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.159681 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.159868 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.159889 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.160070 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.160580 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.160985 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.161131 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.161583 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.161796 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.160458 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.162118 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.162208 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.162235 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.162350 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.162113 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.162435 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.162876 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.163587 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.163950 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.167914 4725 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.176281 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-c9dck"] Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.176693 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.177424 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.178692 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.179405 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.180398 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.181541 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.181931 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182257 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182275 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182285 4725 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182296 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182308 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182320 4725 reconciler_common.go:293] "Volume detached 
for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182332 4725 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182343 4725 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182352 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-c9dck" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182356 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182668 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182688 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182701 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182714 4725 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182726 4725 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182748 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182766 4725 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182779 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182801 4725 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182818 4725 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182832 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182845 4725 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182959 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.183358 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.183871 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.183912 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184178 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184346 4725 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184361 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184372 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184382 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184392 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 
11:04:54.184402 4725 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184411 4725 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184475 4725 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184501 4725 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184510 4725 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184519 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184528 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184538 4725 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184547 4725 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184556 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184566 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184575 4725 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184583 4725 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184592 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184602 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: 
\"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184611 4725 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184620 4725 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184629 4725 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184638 4725 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184647 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184657 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184666 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184675 4725 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184685 4725 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184696 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184705 4725 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184714 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184723 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184732 4725 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on 
node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184741 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184750 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184759 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184768 4725 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184777 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184785 4725 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184795 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: 
I0120 11:04:54.184811 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184822 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184831 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184841 4725 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184852 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184863 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184871 4725 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184880 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184888 4725 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184897 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184910 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184919 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184927 4725 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184946 4725 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184954 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node 
\"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184963 4725 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184972 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184981 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184989 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184998 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.186205 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.186452 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.186479 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.186706 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.186951 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.185008 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187138 4725 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187148 4725 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187157 4725 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187167 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187176 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187188 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: 
I0120 11:04:54.187198 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187208 4725 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187217 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187227 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187236 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187245 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187254 4725 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187262 
4725 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187277 4725 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187286 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187295 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187303 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187312 4725 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187323 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187332 4725 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187340 4725 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187415 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187473 4725 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187486 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187501 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187515 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187527 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: 
\"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187547 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187567 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187580 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187594 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187611 4725 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187810 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.177450 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.245744 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.246624 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.246650 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.246633 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.247585 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.247665 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.247881 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.248406 4725 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.248415 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.248463 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 11:04:54.748444813 +0000 UTC m=+22.956766866 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.248624 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.248804 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.248925 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.249337 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.249309 4725 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.249617 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 11:04:54.749482414 +0000 UTC m=+22.957804387 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.250037 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.250310 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.250477 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.251222 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.252297 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.252749 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.252850 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.252830 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7"} Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.253114 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.253485 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.253495 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.253568 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.253796 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.260995 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.266732 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.268853 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.269224 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.270757 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.277072 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.277855 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291635 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291697 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: 
\"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291729 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a3acff9b-8c0b-4a8a-b81f-449be15f3aef-hosts-file\") pod \"node-resolver-c9dck\" (UID: \"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\") " pod="openshift-dns/node-resolver-c9dck" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291754 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szb2t\" (UniqueName: \"kubernetes.io/projected/a3acff9b-8c0b-4a8a-b81f-449be15f3aef-kube-api-access-szb2t\") pod \"node-resolver-c9dck\" (UID: \"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\") " pod="openshift-dns/node-resolver-c9dck" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291798 4725 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291812 4725 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291825 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291839 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 20 
11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291852 4725 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291864 4725 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291875 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291885 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291896 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291907 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291918 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291928 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" 
(UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291940 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291951 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291962 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291983 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291995 4725 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292005 4725 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292016 4725 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") 
on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292028 4725 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292040 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292050 4725 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292060 4725 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292071 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292110 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292122 4725 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 
11:04:54.292161 4725 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292175 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292185 4725 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292195 4725 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292206 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292217 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292228 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292238 4725 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292248 4725 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292259 4725 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292269 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292279 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292290 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292303 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292316 4725 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 20 
11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292326 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292337 4725 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292348 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292427 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292492 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.305361 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.316912 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.316946 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.316963 4725 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.317028 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 11:04:54.817006915 +0000 UTC m=+23.025328888 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.317393 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.317424 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.317438 4725 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.317493 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 11:04:54.817474709 +0000 UTC m=+23.025796682 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.320818 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.335223 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.337794 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.393718 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a3acff9b-8c0b-4a8a-b81f-449be15f3aef-hosts-file\") pod \"node-resolver-c9dck\" (UID: \"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\") " pod="openshift-dns/node-resolver-c9dck" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.393783 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szb2t\" (UniqueName: \"kubernetes.io/projected/a3acff9b-8c0b-4a8a-b81f-449be15f3aef-kube-api-access-szb2t\") pod \"node-resolver-c9dck\" (UID: \"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\") " pod="openshift-dns/node-resolver-c9dck" Jan 20 11:04:54 crc 
kubenswrapper[4725]: I0120 11:04:54.393879 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.394613 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a3acff9b-8c0b-4a8a-b81f-449be15f3aef-hosts-file\") pod \"node-resolver-c9dck\" (UID: \"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\") " pod="openshift-dns/node-resolver-c9dck" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.505716 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.545429 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.546242 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 11:04:54 crc kubenswrapper[4725]: W0120 11:04:54.623500 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-e7af74551e713f97d5ad3d4402f56eb182a24f58af986f27c8d7d37acdec47d4 WatchSource:0}: Error finding container e7af74551e713f97d5ad3d4402f56eb182a24f58af986f27c8d7d37acdec47d4: Status 404 returned error can't find the container with id e7af74551e713f97d5ad3d4402f56eb182a24f58af986f27c8d7d37acdec47d4 Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.626286 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.626877 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.627118 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:04:55.627099794 +0000 UTC m=+23.835421767 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.651447 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.654680 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szb2t\" (UniqueName: \"kubernetes.io/projected/a3acff9b-8c0b-4a8a-b81f-449be15f3aef-kube-api-access-szb2t\") pod \"node-resolver-c9dck\" (UID: \"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\") " pod="openshift-dns/node-resolver-c9dck" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.701265 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.716096 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.729162 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.829444 4725 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.829522 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 11:04:55.829505231 +0000 UTC m=+24.037827204 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.829445 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.829563 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.829643 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.829670 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.829734 4725 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.829765 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 11:04:55.829756318 +0000 UTC m=+24.038078291 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.829731 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.829799 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.829816 4725 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.829831 4725 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.829846 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.829855 4725 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.829854 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 11:04:55.829840281 +0000 UTC m=+24.038162254 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.829882 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 11:04:55.829873092 +0000 UTC m=+24.038195065 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.847177 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.856141 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.871160 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 
11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.873057 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-c9dck" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.877009 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 23:21:20.966895562 +0000 UTC Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.881633 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-z2gv8"] Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.882017 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.883920 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.884264 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.884299 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.885546 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.885615 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.886156 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.896525 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.912096 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.927984 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.930235 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6a4c10a0-687d-4b24-b1a9-5aba619c0668-proxy-tls\") pod \"machine-config-daemon-z2gv8\" (UID: \"6a4c10a0-687d-4b24-b1a9-5aba619c0668\") " pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.930271 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47wsh\" (UniqueName: \"kubernetes.io/projected/6a4c10a0-687d-4b24-b1a9-5aba619c0668-kube-api-access-47wsh\") pod \"machine-config-daemon-z2gv8\" (UID: \"6a4c10a0-687d-4b24-b1a9-5aba619c0668\") " pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.930294 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6a4c10a0-687d-4b24-b1a9-5aba619c0668-mcd-auth-proxy-config\") pod 
\"machine-config-daemon-z2gv8\" (UID: \"6a4c10a0-687d-4b24-b1a9-5aba619c0668\") " pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.930498 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6a4c10a0-687d-4b24-b1a9-5aba619c0668-rootfs\") pod \"machine-config-daemon-z2gv8\" (UID: \"6a4c10a0-687d-4b24-b1a9-5aba619c0668\") " pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.941675 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.942566 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.944869 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.946340 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.947488 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.948799 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.949480 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.950146 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.951377 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.953183 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.954465 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.955093 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.956593 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.959606 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.960456 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.961207 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.962443 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.963308 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.965434 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.966144 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.966936 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.968579 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.972695 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.973622 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.974508 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.975402 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.975470 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.976188 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.977620 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.978127 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.978818 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" 
path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.980287 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.980805 4725 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.980938 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.983073 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.984553 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.985116 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.987705 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.989169 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.990313 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.992134 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.993356 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.994615 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.995745 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.997559 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.998713 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.002653 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.004350 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.009164 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.011768 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.013914 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.015022 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.016396 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.017668 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.018752 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.020538 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.031606 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6a4c10a0-687d-4b24-b1a9-5aba619c0668-rootfs\") pod \"machine-config-daemon-z2gv8\" (UID: \"6a4c10a0-687d-4b24-b1a9-5aba619c0668\") " pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.031720 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6a4c10a0-687d-4b24-b1a9-5aba619c0668-proxy-tls\") pod \"machine-config-daemon-z2gv8\" (UID: \"6a4c10a0-687d-4b24-b1a9-5aba619c0668\") " pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.031772 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47wsh\" (UniqueName: \"kubernetes.io/projected/6a4c10a0-687d-4b24-b1a9-5aba619c0668-kube-api-access-47wsh\") pod \"machine-config-daemon-z2gv8\" (UID: \"6a4c10a0-687d-4b24-b1a9-5aba619c0668\") " pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.031825 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6a4c10a0-687d-4b24-b1a9-5aba619c0668-mcd-auth-proxy-config\") pod \"machine-config-daemon-z2gv8\" (UID: \"6a4c10a0-687d-4b24-b1a9-5aba619c0668\") " 
pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.033738 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6a4c10a0-687d-4b24-b1a9-5aba619c0668-mcd-auth-proxy-config\") pod \"machine-config-daemon-z2gv8\" (UID: \"6a4c10a0-687d-4b24-b1a9-5aba619c0668\") " pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.033863 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6a4c10a0-687d-4b24-b1a9-5aba619c0668-rootfs\") pod \"machine-config-daemon-z2gv8\" (UID: \"6a4c10a0-687d-4b24-b1a9-5aba619c0668\") " pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.040390 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6a4c10a0-687d-4b24-b1a9-5aba619c0668-proxy-tls\") pod \"machine-config-daemon-z2gv8\" (UID: \"6a4c10a0-687d-4b24-b1a9-5aba619c0668\") " pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.139459 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.163660 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47wsh\" (UniqueName: 
\"kubernetes.io/projected/6a4c10a0-687d-4b24-b1a9-5aba619c0668-kube-api-access-47wsh\") pod \"machine-config-daemon-z2gv8\" (UID: \"6a4c10a0-687d-4b24-b1a9-5aba619c0668\") " pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.168385 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.222256 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.229250 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.258093 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nz9p5"] Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.259717 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.258876 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.270375 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.270543 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.270625 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.270791 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.270901 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.270967 4725 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.271002 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.271332 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-vchwb"] Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.271537 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-z7f69"] Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.272220 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.272476 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.285306 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-c9dck" event={"ID":"a3acff9b-8c0b-4a8a-b81f-449be15f3aef","Type":"ContainerStarted","Data":"6c3f9addd3c4256b3c39a76dba36771cc8c2f4ec5d1302bf9430f42ebedeffd9"} Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.288154 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a"} Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.288194 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e7af74551e713f97d5ad3d4402f56eb182a24f58af986f27c8d7d37acdec47d4"} Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.290639 
4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"f4495cb2afb253ce59d4073c3d3eb7d2e4b170d9dd03dbd86043d5f30460c780"} Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.294618 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.296264 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.298447 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.298655 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.298746 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.299449 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ac496f5dd6638280d62a86ee01e73bd5a039738c60595ff3ab669f5436863a26"} Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.300754 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.300770 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.302511 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182"} Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.302566 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c54b5992f3ffa538b3496e7eb0c81380a4563755475136c9c8892df1c3100765"} Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.367474 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458524 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-etc-kubernetes\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458567 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-kubelet\") pod \"ovnkube-node-nz9p5\" (UID: 
\"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458586 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-run-netns\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458605 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-var-lib-openvswitch\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458624 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-env-overrides\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458641 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-os-release\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458654 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/627f7c97-4173-413f-a90e-e2c5e058c53b-cni-binary-copy\") pod \"multus-vchwb\" (UID: 
\"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458669 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-systemd-units\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458687 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsm7k\" (UniqueName: \"kubernetes.io/projected/9143f3c2-a068-494d-b7e1-4200c04394a3-kube-api-access-fsm7k\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458703 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-var-lib-kubelet\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458718 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-multus-socket-dir-parent\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458733 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-os-release\") pod 
\"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458749 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-ovnkube-config\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458779 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/627f7c97-4173-413f-a90e-e2c5e058c53b-multus-daemon-config\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458797 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-slash\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458832 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-system-cni-dir\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458846 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-ovn\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458861 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-cnibin\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458874 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-hostroot\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458890 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbnsp\" (UniqueName: \"kubernetes.io/projected/627f7c97-4173-413f-a90e-e2c5e058c53b-kube-api-access-jbnsp\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458907 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-system-cni-dir\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458922 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-run-k8s-cni-cncf-io\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458940 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-cni-bin\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458955 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-cnibin\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458969 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-systemd\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458983 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-tuning-conf-dir\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458997 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-run-ovn-kubernetes\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459022 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-run-netns\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459046 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-var-lib-cni-bin\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459060 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-cni-binary-copy\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459089 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9143f3c2-a068-494d-b7e1-4200c04394a3-ovn-node-metrics-cert\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459105 4725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-var-lib-cni-multus\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459119 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-etc-openvswitch\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459154 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-node-log\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459186 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-run-multus-certs\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459239 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 
11:04:55.459277 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q68t4\" (UniqueName: \"kubernetes.io/projected/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-kube-api-access-q68t4\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459292 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-ovnkube-script-lib\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459309 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-openvswitch\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459322 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-log-socket\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459335 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-cni-netd\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459349 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459382 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-multus-cni-dir\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459398 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-multus-conf-dir\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572286 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/627f7c97-4173-413f-a90e-e2c5e058c53b-multus-daemon-config\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572611 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-slash\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572658 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-system-cni-dir\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572674 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-ovn\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572688 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-cnibin\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572702 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-hostroot\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572717 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbnsp\" (UniqueName: \"kubernetes.io/projected/627f7c97-4173-413f-a90e-e2c5e058c53b-kube-api-access-jbnsp\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572733 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-system-cni-dir\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572747 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-run-k8s-cni-cncf-io\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572763 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-cni-bin\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572776 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-cnibin\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572804 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-systemd\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572818 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" 
(UniqueName: \"kubernetes.io/host-path/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-tuning-conf-dir\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572833 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-run-ovn-kubernetes\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572849 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-run-netns\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572868 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-var-lib-cni-bin\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572882 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-cni-binary-copy\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572896 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/9143f3c2-a068-494d-b7e1-4200c04394a3-ovn-node-metrics-cert\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572911 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-var-lib-cni-multus\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572925 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-etc-openvswitch\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572938 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-node-log\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572953 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-run-multus-certs\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572969 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-cni-sysctl-allowlist\") 
pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572990 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q68t4\" (UniqueName: \"kubernetes.io/projected/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-kube-api-access-q68t4\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573006 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-ovnkube-script-lib\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573020 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-openvswitch\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573034 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-log-socket\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573049 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-cni-netd\") pod 
\"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573066 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573116 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-multus-cni-dir\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573136 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-multus-conf-dir\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573154 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-etc-kubernetes\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573169 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-kubelet\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573184 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-run-netns\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573212 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-var-lib-openvswitch\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573232 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-env-overrides\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573247 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-os-release\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573260 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/627f7c97-4173-413f-a90e-e2c5e058c53b-cni-binary-copy\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573273 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-systemd-units\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573287 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsm7k\" (UniqueName: \"kubernetes.io/projected/9143f3c2-a068-494d-b7e1-4200c04394a3-kube-api-access-fsm7k\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573301 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-var-lib-kubelet\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573327 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-multus-socket-dir-parent\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573341 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-os-release\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573354 4725 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-ovnkube-config\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573837 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-cni-binary-copy\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573896 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-slash\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573935 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-system-cni-dir\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573993 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-ovn\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574042 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-cnibin\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574065 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-hostroot\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574155 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-ovnkube-config\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574347 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-system-cni-dir\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574376 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-run-k8s-cni-cncf-io\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574411 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-cni-bin\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574434 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-cnibin\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574460 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-systemd\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574615 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-multus-conf-dir\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574688 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-var-lib-cni-multus\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574715 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-etc-openvswitch\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574725 4725 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-tuning-conf-dir\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574737 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-node-log\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574760 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-run-multus-certs\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574766 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-run-ovn-kubernetes\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574796 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-run-netns\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574823 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-var-lib-cni-bin\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574959 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-os-release\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574977 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-log-socket\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.575031 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-kubelet\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.575008 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-etc-kubernetes\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.575113 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-run-netns\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573192 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/627f7c97-4173-413f-a90e-e2c5e058c53b-multus-daemon-config\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.575198 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-var-lib-openvswitch\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.575536 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/627f7c97-4173-413f-a90e-e2c5e058c53b-cni-binary-copy\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.575587 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-env-overrides\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.575618 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-systemd-units\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.575633 4725 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.575654 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-cni-netd\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.575668 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-openvswitch\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.575696 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-multus-socket-dir-parent\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.575716 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-multus-cni-dir\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.575722 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-var-lib-kubelet\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.575760 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-os-release\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.575885 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.576271 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-ovnkube-script-lib\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.599800 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9143f3c2-a068-494d-b7e1-4200c04394a3-ovn-node-metrics-cert\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.711141 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:55 crc 
kubenswrapper[4725]: I0120 11:04:55.715133 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:04:55 crc kubenswrapper[4725]: E0120 11:04:55.715318 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:04:57.715293226 +0000 UTC m=+25.923615199 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.717358 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.717392 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.717404 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.717532 4725 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.717768 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-jbnsp\" (UniqueName: \"kubernetes.io/projected/627f7c97-4173-413f-a90e-e2c5e058c53b-kube-api-access-jbnsp\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.719481 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsm7k\" (UniqueName: \"kubernetes.io/projected/9143f3c2-a068-494d-b7e1-4200c04394a3-kube-api-access-fsm7k\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.720325 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q68t4\" (UniqueName: \"kubernetes.io/projected/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-kube-api-access-q68t4\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.735940 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.746425 4725 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.746709 4725 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.747718 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.747762 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.747771 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.747791 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.747803 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:55Z","lastTransitionTime":"2026-01-20T11:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.061976 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.062039 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.062967 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.063123 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 20:51:20.430914914 +0000 UTC Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.063155 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.063169 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-vchwb" Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.063142 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.063225 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.063249 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.063286 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.063323 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.063345 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.063390 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.063367 4725 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.063449 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.063461 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.063470 4725 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.063476 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 11:04:58.063455055 +0000 UTC m=+26.271777028 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.063521 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 11:04:58.063510237 +0000 UTC m=+26.271832210 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.063549 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.063564 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.063575 4725 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.063625 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 11:04:58.06360979 +0000 UTC m=+26.271931803 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:04:56 crc kubenswrapper[4725]: W0120 11:04:56.091641 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9143f3c2_a068_494d_b7e1_4200c04394a3.slice/crio-841bffee0b69f32791094c7c4308ffc3cc66e9ac1d4699a14fb043fa42825bd2 WatchSource:0}: Error finding container 841bffee0b69f32791094c7c4308ffc3cc66e9ac1d4699a14fb043fa42825bd2: Status 404 returned error can't find the container with id 841bffee0b69f32791094c7c4308ffc3cc66e9ac1d4699a14fb043fa42825bd2 Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.139558 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.150508 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:/
/5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.150827 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.150850 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.150859 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.150872 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.150883 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:56Z","lastTransitionTime":"2026-01-20T11:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.189398 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.190269 4725 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.190785 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 11:04:58.190570679 +0000 UTC m=+26.398892652 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.201549 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"message\\\":\\\"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redha
t-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc
4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\
"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":4488870
27}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.228240 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.228276 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.228285 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.228302 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.228316 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:56Z","lastTransitionTime":"2026-01-20T11:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.235381 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.409934 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" event={"ID":"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0","Type":"ContainerStarted","Data":"b2417ad0dc80b5b1ae4121d1bb3e00865d148a8b7a5961fa3babe151601b99d7"} Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.416263 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vchwb" event={"ID":"627f7c97-4173-413f-a90e-e2c5e058c53b","Type":"ContainerStarted","Data":"293cb950a6f3068b98caed1152bca23ce692d80ad5274feae968cc50159c725f"} Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.417860 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665"} Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.430889 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.431356 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.443534 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.443564 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.443572 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.443586 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.443610 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:56Z","lastTransitionTime":"2026-01-20T11:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.444730 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1"} Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.456188 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.457682 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerStarted","Data":"841bffee0b69f32791094c7c4308ffc3cc66e9ac1d4699a14fb043fa42825bd2"} Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.460843 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-c9dck" event={"ID":"a3acff9b-8c0b-4a8a-b81f-449be15f3aef","Type":"ContainerStarted","Data":"18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07"} Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.463677 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.472155 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.472188 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.472198 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.472214 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.472223 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:56Z","lastTransitionTime":"2026-01-20T11:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.474054 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.483486 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.483601 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.485806 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.485831 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.485840 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.485854 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.485863 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:56Z","lastTransitionTime":"2026-01-20T11:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.590295 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.590352 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.590367 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.590384 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.590395 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:56Z","lastTransitionTime":"2026-01-20T11:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.597506 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.609588 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.734329 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.734380 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.734390 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.734405 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.734414 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:56Z","lastTransitionTime":"2026-01-20T11:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.749305 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.775882 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.858332 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.858495 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.858594 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.858679 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.858752 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:56Z","lastTransitionTime":"2026-01-20T11:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.865553 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a
1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.895925 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.906258 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.914686 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.926263 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.936281 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.946213 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.957110 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.961738 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.961787 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.961800 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.961821 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 
11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.961834 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:56Z","lastTransitionTime":"2026-01-20T11:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.969326 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.989756 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:56Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.002010 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.025306 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.053472 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.067523 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 15:42:01.327327958 +0000 UTC Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.069687 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.069737 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.069751 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.069771 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.069786 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:57Z","lastTransitionTime":"2026-01-20T11:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.073899 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7
dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.105196 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.122142 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.140143 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.152315 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.164777 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.172223 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.172274 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.172287 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.172304 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.172316 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:57Z","lastTransitionTime":"2026-01-20T11:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.198038 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.239422 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.264131 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.275150 4725 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.275198 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.275209 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.275224 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.275250 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:57Z","lastTransitionTime":"2026-01-20T11:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.377952 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.377995 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.378019 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.378037 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.378049 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:57Z","lastTransitionTime":"2026-01-20T11:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.476053 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" event={"ID":"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0","Type":"ContainerStarted","Data":"38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf"} Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.477262 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vchwb" event={"ID":"627f7c97-4173-413f-a90e-e2c5e058c53b","Type":"ContainerStarted","Data":"60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad"} Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.480364 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d"} Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.481051 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.481876 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.481980 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.482375 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.482755 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:57Z","lastTransitionTime":"2026-01-20T11:04:57Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.482525 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be"} Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.483712 4725 generic.go:334] "Generic (PLEG): container finished" podID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerID="4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa" exitCode=0 Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.483770 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerDied","Data":"4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa"} Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.569617 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.590098 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.590162 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.590178 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.590200 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.590213 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:57Z","lastTransitionTime":"2026-01-20T11:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.615210 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.633160 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-fv2jh"] Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.633563 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-fv2jh" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.635711 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.635912 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.636198 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.636892 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.637778 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 
11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.663190 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.702793 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.742704 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.742744 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.742763 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.742788 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.742802 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:57Z","lastTransitionTime":"2026-01-20T11:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.758191 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.771571 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.771702 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh4k2\" (UniqueName: \"kubernetes.io/projected/a3fffa1c-6d54-432d-9090-da67cd8ca2ee-kube-api-access-rh4k2\") pod \"node-ca-fv2jh\" (UID: \"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\") " pod="openshift-image-registry/node-ca-fv2jh" Jan 20 11:04:57 crc kubenswrapper[4725]: E0120 11:04:57.771734 4725 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:05:01.771693552 +0000 UTC m=+29.980015585 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.771787 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a3fffa1c-6d54-432d-9090-da67cd8ca2ee-serviceca\") pod \"node-ca-fv2jh\" (UID: \"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\") " pod="openshift-image-registry/node-ca-fv2jh" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.771822 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a3fffa1c-6d54-432d-9090-da67cd8ca2ee-host\") pod \"node-ca-fv2jh\" (UID: \"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\") " pod="openshift-image-registry/node-ca-fv2jh" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.779072 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.791733 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.808835 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.822995 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.838955 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.846812 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.846876 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.846891 4725 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.846909 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.846923 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:57Z","lastTransitionTime":"2026-01-20T11:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.855059 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d
10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.870119 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.872712 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh4k2\" (UniqueName: \"kubernetes.io/projected/a3fffa1c-6d54-432d-9090-da67cd8ca2ee-kube-api-access-rh4k2\") pod \"node-ca-fv2jh\" (UID: \"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\") " pod="openshift-image-registry/node-ca-fv2jh" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.872759 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a3fffa1c-6d54-432d-9090-da67cd8ca2ee-serviceca\") pod \"node-ca-fv2jh\" (UID: \"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\") " pod="openshift-image-registry/node-ca-fv2jh" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.872782 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a3fffa1c-6d54-432d-9090-da67cd8ca2ee-host\") pod \"node-ca-fv2jh\" (UID: \"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\") 
" pod="openshift-image-registry/node-ca-fv2jh" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.872903 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a3fffa1c-6d54-432d-9090-da67cd8ca2ee-host\") pod \"node-ca-fv2jh\" (UID: \"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\") " pod="openshift-image-registry/node-ca-fv2jh" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.874386 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a3fffa1c-6d54-432d-9090-da67cd8ca2ee-serviceca\") pod \"node-ca-fv2jh\" (UID: \"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\") " pod="openshift-image-registry/node-ca-fv2jh" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.883824 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.895208 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh4k2\" (UniqueName: \"kubernetes.io/projected/a3fffa1c-6d54-432d-9090-da67cd8ca2ee-kube-api-access-rh4k2\") pod \"node-ca-fv2jh\" (UID: \"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\") " pod="openshift-image-registry/node-ca-fv2jh" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.909607 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-a
piserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:
9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.932153 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:04:57 crc kubenswrapper[4725]: E0120 11:04:57.932284 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.932699 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:04:57 crc kubenswrapper[4725]: E0120 11:04:57.932771 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.932834 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:04:57 crc kubenswrapper[4725]: E0120 11:04:57.932906 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.952709 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.952743 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.952774 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.952793 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.952805 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:57Z","lastTransitionTime":"2026-01-20T11:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.975501 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.979111 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-fv2jh" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.002210 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.035610 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.048818 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.055332 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.055380 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.055397 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.055415 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.055430 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:58Z","lastTransitionTime":"2026-01-20T11:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.069434 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 21:47:49.869167134 +0000 UTC Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.071352 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.075026 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.075072 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.075178 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:04:58 crc kubenswrapper[4725]: E0120 11:04:58.075346 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 11:04:58 crc kubenswrapper[4725]: E0120 11:04:58.075367 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 11:04:58 crc kubenswrapper[4725]: E0120 11:04:58.075380 4725 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:04:58 crc kubenswrapper[4725]: E0120 11:04:58.075431 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:02.07541384 +0000 UTC m=+30.283735813 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:04:58 crc kubenswrapper[4725]: E0120 11:04:58.075734 4725 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 11:04:58 crc kubenswrapper[4725]: E0120 11:04:58.075793 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:02.075775591 +0000 UTC m=+30.284097604 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 11:04:58 crc kubenswrapper[4725]: E0120 11:04:58.075864 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 11:04:58 crc kubenswrapper[4725]: E0120 11:04:58.075891 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 11:04:58 crc kubenswrapper[4725]: E0120 11:04:58.075903 4725 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:04:58 crc kubenswrapper[4725]: E0120 11:04:58.075943 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:02.075932226 +0000 UTC m=+30.284254249 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.092398 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-mu
ltus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"st
artTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.106159 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\
"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.120217 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.130931 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.141984 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.154610 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.157487 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.157506 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.157514 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.157526 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.157535 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:58Z","lastTransitionTime":"2026-01-20T11:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.167980 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.180942 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.194612 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.260808 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.260850 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.260865 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.260883 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.260894 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:58Z","lastTransitionTime":"2026-01-20T11:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.278671 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:04:58 crc kubenswrapper[4725]: E0120 11:04:58.278888 4725 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 11:04:58 crc kubenswrapper[4725]: E0120 11:04:58.279003 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:02.278980692 +0000 UTC m=+30.487302685 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.365526 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.365565 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.365578 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.365595 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.365608 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:58Z","lastTransitionTime":"2026-01-20T11:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.468299 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.468333 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.468345 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.468362 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.468373 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:58Z","lastTransitionTime":"2026-01-20T11:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.497539 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-fv2jh" event={"ID":"a3fffa1c-6d54-432d-9090-da67cd8ca2ee","Type":"ContainerStarted","Data":"50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83"} Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.497607 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-fv2jh" event={"ID":"a3fffa1c-6d54-432d-9090-da67cd8ca2ee","Type":"ContainerStarted","Data":"23077b4603f9d9f7226353bc7284da75ee15fe39826b9d621fa4231e9b413fb4"} Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.500272 4725 generic.go:334] "Generic (PLEG): container finished" podID="4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0" containerID="38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf" exitCode=0 Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.500352 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" event={"ID":"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0","Type":"ContainerDied","Data":"38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf"} Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.506019 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerStarted","Data":"9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385"} Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.506057 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerStarted","Data":"eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5"} Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.506069 4725 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerStarted","Data":"f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348"} Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.525589 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.543434 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.589410 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.615871 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.615922 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.615933 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.615955 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.615968 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:58Z","lastTransitionTime":"2026-01-20T11:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.620758 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.652252 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.707739 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab4
4ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\
":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.777046 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.777100 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.777116 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.777133 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.777144 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:58Z","lastTransitionTime":"2026-01-20T11:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.779800 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.879294 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.879340 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.879350 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.879369 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.879390 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:58Z","lastTransitionTime":"2026-01-20T11:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.883349 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.946851 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.000186 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.002125 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.002153 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.002161 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.002176 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.002187 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:59Z","lastTransitionTime":"2026-01-20T11:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.043037 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.070027 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 11:46:46.057429451 +0000 UTC Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.076674 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.110834 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.110875 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.110888 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.110904 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.110913 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:59Z","lastTransitionTime":"2026-01-20T11:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.122394 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.138451 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.255490 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.269139 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.282146 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.291903 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.301286 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.321933 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.337803 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 
11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.356219 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.356261 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.356271 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.356286 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.356295 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:59Z","lastTransitionTime":"2026-01-20T11:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.361958 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.377821 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.402439 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.425885 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.442630 4725 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.457103 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.458518 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.458547 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.458558 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.458572 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.458585 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:59Z","lastTransitionTime":"2026-01-20T11:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.469703 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.484120 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.566008 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.566049 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.566064 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.566099 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.566113 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:59Z","lastTransitionTime":"2026-01-20T11:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.570836 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerStarted","Data":"647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7"} Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.570872 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerStarted","Data":"d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0"} Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.570881 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerStarted","Data":"4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604"} Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.573936 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.574716 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" 
event={"ID":"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0","Type":"ContainerStarted","Data":"a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521"} Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.588552 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.602471 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.621898 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.648504 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.670648 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.670678 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.670687 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.670700 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.670709 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:59Z","lastTransitionTime":"2026-01-20T11:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.728826 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.745148 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.758386 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.771089 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.773876 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.773903 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.773917 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.773941 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.773957 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:59Z","lastTransitionTime":"2026-01-20T11:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.791338 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.813020 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.827721 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.841852 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.852164 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.861757 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.875486 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.876648 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.876691 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure"
Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.876700 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.876717 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.876730 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:59Z","lastTransitionTime":"2026-01-20T11:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.932138 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.932178 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.932139 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 11:04:59 crc kubenswrapper[4725]: E0120 11:04:59.932281 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 11:04:59 crc kubenswrapper[4725]: E0120 11:04:59.932330 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 20 11:04:59 crc kubenswrapper[4725]: E0120 11:04:59.932398 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.979037 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.979099 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.979112 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.979129 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.979158 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:59Z","lastTransitionTime":"2026-01-20T11:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.070359 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 12:59:18.355067544 +0000 UTC
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.081644 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.081731 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.081759 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.081810 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.081845 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:00Z","lastTransitionTime":"2026-01-20T11:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.187210 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.187272 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.187290 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.187314 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.187332 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:00Z","lastTransitionTime":"2026-01-20T11:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.290315 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.290703 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.290723 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.290748 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.290764 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:00Z","lastTransitionTime":"2026-01-20T11:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.393516 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.393569 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.393588 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.393612 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.393629 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:00Z","lastTransitionTime":"2026-01-20T11:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.495718 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.495757 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.495768 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.495783 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.495793 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:00Z","lastTransitionTime":"2026-01-20T11:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.589949 4725 generic.go:334] "Generic (PLEG): container finished" podID="4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0" containerID="a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521" exitCode=0
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.590069 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" event={"ID":"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0","Type":"ContainerDied","Data":"a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521"}
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.598323 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.598489 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.598594 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.598741 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.598873 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:00Z","lastTransitionTime":"2026-01-20T11:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.611190 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:00Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.628939 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:00Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.647740 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:00Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.663704 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:00Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.680264 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:00Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.690946 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T11:05:00Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.704116 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:00Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.707278 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 
11:05:00.707320 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.707332 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.707349 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.707363 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:00Z","lastTransitionTime":"2026-01-20T11:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.733568 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-20T11:05:00Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.751542 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\
":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-20T11:05:00Z is after 2025-08-24T17:21:41Z"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.768023 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:00Z is after 2025-08-24T17:21:41Z"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.782100 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:00Z is after 2025-08-24T17:21:41Z"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.796678 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:00Z is after 2025-08-24T17:21:41Z"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.809920 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.809962 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.809971 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.809988 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.810000 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:00Z","lastTransitionTime":"2026-01-20T11:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.815303 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:00Z is after 2025-08-24T17:21:41Z"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.826646 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:00Z is after 2025-08-24T17:21:41Z"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.841533 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:00Z is after 2025-08-24T17:21:41Z"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.912613 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.912648 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.912660 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.912676 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.912688 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:00Z","lastTransitionTime":"2026-01-20T11:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.016517 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.016610 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.016661 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.016697 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.016751 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:01Z","lastTransitionTime":"2026-01-20T11:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.070804 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 03:46:57.780160046 +0000 UTC
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.118315 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.118353 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.118363 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.118378 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.118392 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:01Z","lastTransitionTime":"2026-01-20T11:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.221048 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.221121 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.221142 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.221163 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.221174 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:01Z","lastTransitionTime":"2026-01-20T11:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.323202 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.323240 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.323249 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.323264 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.323275 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:01Z","lastTransitionTime":"2026-01-20T11:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.426122 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.426162 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.426173 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.426189 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.426201 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:01Z","lastTransitionTime":"2026-01-20T11:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.529139 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.529178 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.529188 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.529204 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.529216 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:01Z","lastTransitionTime":"2026-01-20T11:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.594963 4725 generic.go:334] "Generic (PLEG): container finished" podID="4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0" containerID="d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb" exitCode=0
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.595020 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" event={"ID":"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0","Type":"ContainerDied","Data":"d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb"}
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.620238 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:01Z is after 2025-08-24T17:21:41Z"
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.631474 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.631517 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.631528 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.631544 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.631556 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:01Z","lastTransitionTime":"2026-01-20T11:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.638327 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:01Z is after 2025-08-24T17:21:41Z"
Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.650741 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.662814 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-20T11:05:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.680322 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 
11:05:01.716760 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 
11:05:01.738012 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.738050 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.738061 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.738098 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.738110 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:01Z","lastTransitionTime":"2026-01-20T11:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.810145 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:05:01 crc kubenswrapper[4725]: E0120 11:05:01.810375 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:05:09.810352815 +0000 UTC m=+38.018674788 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.831851 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.841120 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.841164 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.841174 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.841189 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.841199 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:01Z","lastTransitionTime":"2026-01-20T11:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.846288 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.855027 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.864496 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.875512 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.892345 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 
11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.916425 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.931731 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.931795 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:01 crc kubenswrapper[4725]: E0120 11:05:01.931863 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:01 crc kubenswrapper[4725]: E0120 11:05:01.931922 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.931739 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:01 crc kubenswrapper[4725]: E0120 11:05:01.932000 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.938334 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.943847 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.943891 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.943902 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.943919 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.943929 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:01Z","lastTransitionTime":"2026-01-20T11:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.961849 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.046601 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.046635 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.046644 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.046659 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.046669 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:02Z","lastTransitionTime":"2026-01-20T11:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.071172 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 05:50:40.08269041 +0000 UTC Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.113022 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.113065 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.113097 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:02 crc kubenswrapper[4725]: E0120 11:05:02.113197 4725 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 11:05:02 crc kubenswrapper[4725]: E0120 11:05:02.113249 4725 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:10.113236498 +0000 UTC m=+38.321558471 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 11:05:02 crc kubenswrapper[4725]: E0120 11:05:02.113304 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 11:05:02 crc kubenswrapper[4725]: E0120 11:05:02.113335 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 11:05:02 crc kubenswrapper[4725]: E0120 11:05:02.113347 4725 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:05:02 crc kubenswrapper[4725]: E0120 11:05:02.113400 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:10.113383483 +0000 UTC m=+38.321705456 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:05:02 crc kubenswrapper[4725]: E0120 11:05:02.113463 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 11:05:02 crc kubenswrapper[4725]: E0120 11:05:02.113508 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 11:05:02 crc kubenswrapper[4725]: E0120 11:05:02.113524 4725 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:05:02 crc kubenswrapper[4725]: E0120 11:05:02.113608 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:10.113587999 +0000 UTC m=+38.321910042 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.149165 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.149213 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.149224 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.149242 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.149255 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:02Z","lastTransitionTime":"2026-01-20T11:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.251589 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.251632 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.251643 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.251657 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.251667 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:02Z","lastTransitionTime":"2026-01-20T11:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.355170 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:02 crc kubenswrapper[4725]: E0120 11:05:02.355409 4725 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 11:05:02 crc kubenswrapper[4725]: E0120 11:05:02.355567 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:10.355516749 +0000 UTC m=+38.563838732 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.357229 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.357275 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.357292 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.357559 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.357728 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:02Z","lastTransitionTime":"2026-01-20T11:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.461974 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.462284 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.462391 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.462495 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.462590 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:02Z","lastTransitionTime":"2026-01-20T11:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.566492 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.566545 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.566556 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.566573 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.566584 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:02Z","lastTransitionTime":"2026-01-20T11:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.607245 4725 generic.go:334] "Generic (PLEG): container finished" podID="4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0" containerID="5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c" exitCode=0 Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.607439 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" event={"ID":"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0","Type":"ContainerDied","Data":"5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c"} Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.616261 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerStarted","Data":"62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f"} Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.637540 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.669693 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.669724 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.669736 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.669750 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.669759 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:02Z","lastTransitionTime":"2026-01-20T11:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.678862 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.696958 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.720179 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.734175 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd
62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.766034 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819ee
db413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.771187 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.771222 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.771235 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.771253 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.771264 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:02Z","lastTransitionTime":"2026-01-20T11:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.778426 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a
1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.790025 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.800562 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.811265 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.822277 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.834868 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.844663 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.860064 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.872530 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.873576 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.873627 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:02 crc 
kubenswrapper[4725]: I0120 11:05:02.873645 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.873664 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.873676 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:02Z","lastTransitionTime":"2026-01-20T11:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.962345 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.977317 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.977401 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.977428 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.977459 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.977482 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:02Z","lastTransitionTime":"2026-01-20T11:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.978843 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.995142 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"ipta
bles-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.012532 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91d
d6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}
}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.027765 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.042917 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\
\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.058714 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.070391 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.071300 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 03:11:58.200343424 +0000 UTC Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.079841 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 
11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.079869 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.079880 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.079896 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.079908 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:03Z","lastTransitionTime":"2026-01-20T11:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.081117 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.093223 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.104560 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.197786 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.197828 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:03 crc 
kubenswrapper[4725]: I0120 11:05:03.197841 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.197858 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.197870 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:03Z","lastTransitionTime":"2026-01-20T11:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.210600 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.231834 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.248071 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.265899 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.300071 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.300114 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.300123 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.300135 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.300144 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:03Z","lastTransitionTime":"2026-01-20T11:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.402496 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.402533 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.402545 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.402563 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.402574 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:03Z","lastTransitionTime":"2026-01-20T11:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.498633 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.514226 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.528495 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.528524 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.528537 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.528552 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.528561 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:03Z","lastTransitionTime":"2026-01-20T11:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.542241 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f
42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finis
hedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.555827 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.573981 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.585962 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335
e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating 
requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.598443 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.610337 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.630621 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.630647 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.630656 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.630668 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.630676 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:03Z","lastTransitionTime":"2026-01-20T11:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.632679 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" event={"ID":"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0","Type":"ContainerStarted","Data":"c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64"} Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.643981 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.765039 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.765108 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.765131 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.765149 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.765160 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:03Z","lastTransitionTime":"2026-01-20T11:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.766255 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.779381 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\
\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.790859 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"s
tate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o:/
/c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.802493 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.814446 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.827108 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.836484 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.851528 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.868988 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:03 crc 
kubenswrapper[4725]: I0120 11:05:03.869018 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.869027 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.869041 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.869050 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:03Z","lastTransitionTime":"2026-01-20T11:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.884714 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.900507 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.914750 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.926358 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.931438 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.931475 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.931491 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:03 crc kubenswrapper[4725]: E0120 11:05:03.931637 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:03 crc kubenswrapper[4725]: E0120 11:05:03.931688 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:03 crc kubenswrapper[4725]: E0120 11:05:03.931770 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.940652 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68
t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.951364 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.961893 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.970951 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.970994 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.971005 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.971019 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.971028 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:03Z","lastTransitionTime":"2026-01-20T11:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.973768 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.983985 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.998797 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.012361 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70
b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:04Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.032511 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:04Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.047862 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:04Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.066203 4725 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:04Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.072685 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 21:59:50.374556764 +0000 UTC Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.073252 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.073300 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.073315 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.073335 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.073349 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:04Z","lastTransitionTime":"2026-01-20T11:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.176163 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.176202 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.176210 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.176225 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.176240 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:04Z","lastTransitionTime":"2026-01-20T11:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.322425 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.322473 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.322482 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.322497 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.322508 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:04Z","lastTransitionTime":"2026-01-20T11:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.424448 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.424480 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.424496 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.424508 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.424517 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:04Z","lastTransitionTime":"2026-01-20T11:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.526465 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.529626 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.529648 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.529674 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.529691 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:04Z","lastTransitionTime":"2026-01-20T11:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.632989 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.633042 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.633061 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.633131 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.633157 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:04Z","lastTransitionTime":"2026-01-20T11:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.642061 4725 generic.go:334] "Generic (PLEG): container finished" podID="4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0" containerID="c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64" exitCode=0 Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.642179 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" event={"ID":"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0","Type":"ContainerDied","Data":"c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64"} Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.649223 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerStarted","Data":"017096767eeb1b40073e7f4de6ddf7d02039af4f8fea06aa06f0b15faaaffff4"} Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.649733 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.649807 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.670368 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:04Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.696789 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:04Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.707627 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.739482 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.739538 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.739552 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.739570 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 
11:05:04.739583 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:04Z","lastTransitionTime":"2026-01-20T11:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.744248 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:04Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.752688 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.770383 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70
b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:04Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.859107 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.859153 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.859163 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.859177 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.859186 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:04Z","lastTransitionTime":"2026-01-20T11:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.867678 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:04Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.884935 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:04Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.897912 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:04Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.908942 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-20T11:05:04Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.948814 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:04Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.962055 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e
135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountP
ath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:04Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.972626 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.972662 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.972672 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.972684 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.972693 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:04Z","lastTransitionTime":"2026-01-20T11:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.982526 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a
1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:04Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.998650 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:04Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.014363 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.025766 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.040946 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.057924 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70
b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.073256 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 18:30:47.178010685 +0000 UTC Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.075606 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.075636 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.075644 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.075665 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.075674 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:05Z","lastTransitionTime":"2026-01-20T11:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.079264 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.097560 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.116304 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017096767eeb1b40073e7f4de6ddf7d02039af4f8fea06aa06f0b15faaaffff4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.129543 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.146986 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.160569 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.177912 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.177957 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.177971 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.177989 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.178000 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:05Z","lastTransitionTime":"2026-01-20T11:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.181406 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.202321 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.216713 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e
135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountP
ath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.275589 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.282587 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.282626 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.282635 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:05 crc 
kubenswrapper[4725]: I0120 11:05:05.282651 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.282661 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:05Z","lastTransitionTime":"2026-01-20T11:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.291967 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.301185 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.311692 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.338633 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.384507 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.384536 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:05 crc 
kubenswrapper[4725]: I0120 11:05:05.384544 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.384557 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.384565 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:05Z","lastTransitionTime":"2026-01-20T11:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.486554 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.486583 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.486591 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.486604 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.486612 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:05Z","lastTransitionTime":"2026-01-20T11:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.552720 4725 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.589248 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.589316 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.589333 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.589364 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.589381 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:05Z","lastTransitionTime":"2026-01-20T11:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.656487 4725 generic.go:334] "Generic (PLEG): container finished" podID="4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0" containerID="838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906" exitCode=0 Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.656670 4725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.670411 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" event={"ID":"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0","Type":"ContainerDied","Data":"838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906"} Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.684162 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25d
ca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.700769 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.709648 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.709683 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.709693 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.709708 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.709719 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:05Z","lastTransitionTime":"2026-01-20T11:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.715467 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.739027 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.754811 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.768586 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70
b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.790154 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.816025 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.819153 4725 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.819197 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.819215 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.819240 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.819258 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:05Z","lastTransitionTime":"2026-01-20T11:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.843895 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017096767eeb1b40073e7f4de6ddf7d02039af4f8fea06aa06f0b15faaaffff4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.862716 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.879170 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.891339 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.907503 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.937307 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.937540 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:05 crc kubenswrapper[4725]: E0120 11:05:05.937672 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.937742 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:05 crc kubenswrapper[4725]: E0120 11:05:05.937799 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:05 crc kubenswrapper[4725]: E0120 11:05:05.938312 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.938401 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.938424 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.938434 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.938449 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.938461 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:05Z","lastTransitionTime":"2026-01-20T11:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.945186 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.957291 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.041225 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.041284 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.041302 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.041327 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.041344 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:06Z","lastTransitionTime":"2026-01-20T11:05:06Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.073528 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 00:43:50.027797823 +0000 UTC Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.143968 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.144032 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.144049 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.144105 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.144132 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:06Z","lastTransitionTime":"2026-01-20T11:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.247023 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.247315 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.247329 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.247353 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.247367 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:06Z","lastTransitionTime":"2026-01-20T11:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.350245 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.350297 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.350311 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.350328 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.350341 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:06Z","lastTransitionTime":"2026-01-20T11:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.453976 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.454041 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.454069 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.454138 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.454158 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:06Z","lastTransitionTime":"2026-01-20T11:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.557854 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.557930 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.557949 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.557978 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.557996 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:06Z","lastTransitionTime":"2026-01-20T11:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.607278 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.607364 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.607390 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.607443 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.607468 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:06Z","lastTransitionTime":"2026-01-20T11:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:06 crc kubenswrapper[4725]: E0120 11:05:06.633753 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.640392 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.640432 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.640444 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.640458 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.640469 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:06Z","lastTransitionTime":"2026-01-20T11:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:06 crc kubenswrapper[4725]: E0120 11:05:06.655705 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.661525 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.661563 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.661575 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.661593 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.661608 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:06Z","lastTransitionTime":"2026-01-20T11:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.664369 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" event={"ID":"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0","Type":"ContainerStarted","Data":"ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4"} Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.664538 4725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 11:05:06 crc kubenswrapper[4725]: E0120 11:05:06.678034 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.678706 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.683652 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.683689 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.683699 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 
11:05:06.683714 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.683727 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:06Z","lastTransitionTime":"2026-01-20T11:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.693826 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] 
Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: E0120 11:05:06.699355 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.702961 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.702998 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.703008 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.703024 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.703033 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:06Z","lastTransitionTime":"2026-01-20T11:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:06 crc kubenswrapper[4725]: E0120 11:05:06.736947 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: E0120 11:05:06.737163 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.738935 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.739118 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.739242 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.739366 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.739514 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:06Z","lastTransitionTime":"2026-01-20T11:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.742919 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.762395 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.807761 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017096767eeb1b40073e7f4de6ddf7d02039af4f8fea06aa06f0b15faaaffff4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.821554 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.836607 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/o
pt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616
e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.841283 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.841320 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.841331 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.841348 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.841360 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:06Z","lastTransitionTime":"2026-01-20T11:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.848053 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z 
is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.858445 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f0027
6d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a5
78bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.873049 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.932916 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.944693 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.944746 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.944761 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.944784 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.944800 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:06Z","lastTransitionTime":"2026-01-20T11:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.948054 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.961201 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.978761 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.988783 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.047402 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.047444 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.047462 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.047479 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.047491 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:07Z","lastTransitionTime":"2026-01-20T11:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.074364 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 02:51:23.40814568 +0000 UTC Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.149829 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.150164 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.150246 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.150333 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.150427 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:07Z","lastTransitionTime":"2026-01-20T11:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.253641 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.253780 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.253818 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.253842 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.253858 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:07Z","lastTransitionTime":"2026-01-20T11:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.356412 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.356471 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.356489 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.356512 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.356530 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:07Z","lastTransitionTime":"2026-01-20T11:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.459327 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.459356 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.459364 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.459376 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.459387 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:07Z","lastTransitionTime":"2026-01-20T11:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.561424 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.561456 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.561466 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.561484 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.561492 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:07Z","lastTransitionTime":"2026-01-20T11:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.663886 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.663939 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.663957 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.663975 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.663986 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:07Z","lastTransitionTime":"2026-01-20T11:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.766198 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.766466 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.766559 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.766674 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.766754 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:07Z","lastTransitionTime":"2026-01-20T11:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.869563 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.869599 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.869608 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.869622 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.869631 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:07Z","lastTransitionTime":"2026-01-20T11:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.918240 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.931220 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:07 crc kubenswrapper[4725]: E0120 11:05:07.931364 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.931739 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:07 crc kubenswrapper[4725]: E0120 11:05:07.931802 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.931848 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:07 crc kubenswrapper[4725]: E0120 11:05:07.931899 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.972239 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.972266 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.972276 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.972291 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.972302 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:07Z","lastTransitionTime":"2026-01-20T11:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.074979 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 02:01:04.36182691 +0000 UTC Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.075331 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.075773 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.075837 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.075976 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.076084 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:08Z","lastTransitionTime":"2026-01-20T11:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.178308 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.178359 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.178375 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.178405 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.178422 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:08Z","lastTransitionTime":"2026-01-20T11:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.281802 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.281875 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.281900 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.281930 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.281951 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:08Z","lastTransitionTime":"2026-01-20T11:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.384876 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.384930 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.384949 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.384974 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.384995 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:08Z","lastTransitionTime":"2026-01-20T11:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.487976 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.488064 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.488134 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.488168 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.488191 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:08Z","lastTransitionTime":"2026-01-20T11:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.591292 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.591334 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.591348 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.591366 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.591379 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:08Z","lastTransitionTime":"2026-01-20T11:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.693949 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.694001 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.694017 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.694036 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.694048 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:08Z","lastTransitionTime":"2026-01-20T11:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.797148 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.797205 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.797225 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.797249 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.797267 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:08Z","lastTransitionTime":"2026-01-20T11:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.900488 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.900520 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.900529 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.900542 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.900551 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:08Z","lastTransitionTime":"2026-01-20T11:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.003637 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.003699 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.003722 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.003753 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.003776 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:09Z","lastTransitionTime":"2026-01-20T11:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.076816 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 01:34:28.401723752 +0000 UTC Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.106545 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.106605 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.106617 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.106632 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.106641 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:09Z","lastTransitionTime":"2026-01-20T11:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.210262 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.210353 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.210376 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.210409 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.210432 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:09Z","lastTransitionTime":"2026-01-20T11:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.313984 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.314021 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.314031 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.314045 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.314056 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:09Z","lastTransitionTime":"2026-01-20T11:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.416835 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.416880 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.416892 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.416908 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.416919 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:09Z","lastTransitionTime":"2026-01-20T11:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.520025 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.520064 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.520078 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.520212 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.520228 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:09Z","lastTransitionTime":"2026-01-20T11:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.623507 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.623575 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.623586 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.623604 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.623616 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:09Z","lastTransitionTime":"2026-01-20T11:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.676422 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovnkube-controller/0.log" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.680393 4725 generic.go:334] "Generic (PLEG): container finished" podID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerID="017096767eeb1b40073e7f4de6ddf7d02039af4f8fea06aa06f0b15faaaffff4" exitCode=1 Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.680451 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerDied","Data":"017096767eeb1b40073e7f4de6ddf7d02039af4f8fea06aa06f0b15faaaffff4"} Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.681574 4725 scope.go:117] "RemoveContainer" containerID="017096767eeb1b40073e7f4de6ddf7d02039af4f8fea06aa06f0b15faaaffff4" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.701775 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.718955 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.725753 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.725790 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.725805 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.725831 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.725847 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:09Z","lastTransitionTime":"2026-01-20T11:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.732853 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.751360 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.753497 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r"] Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.754294 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.757683 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.757960 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.770697 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.787989 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.807524 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017096767eeb1b40073e7f4de6ddf7d02039af4f8fea06aa06f0b15faaaffff4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017096767eeb1b40073e7f4de6ddf7d02039af4f8fea06aa06f0b15faaaffff4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"message\\\":\\\"0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603752 5931 reflector.go:311] Stopping reflector *v1.EgressService 
(0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603803 5931 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603923 5931 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.604309 5931 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.604414 5931 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 11:05:08.604458 5931 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"
/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o
://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.821603 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70
b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.827524 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.827551 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.827558 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.827571 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.827580 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:09Z","lastTransitionTime":"2026-01-20T11:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.834102 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:05:09 crc kubenswrapper[4725]: E0120 11:05:09.834248 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:05:25.83423025 +0000 UTC m=+54.042552223 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.834313 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6de4324f-3428-4409-92a4-940e5b94fe12-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8ls4r\" (UID: \"6de4324f-3428-4409-92a4-940e5b94fe12\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.834341 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/6de4324f-3428-4409-92a4-940e5b94fe12-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8ls4r\" (UID: \"6de4324f-3428-4409-92a4-940e5b94fe12\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.834365 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6de4324f-3428-4409-92a4-940e5b94fe12-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8ls4r\" (UID: \"6de4324f-3428-4409-92a4-940e5b94fe12\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.834449 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfkbm\" (UniqueName: \"kubernetes.io/projected/6de4324f-3428-4409-92a4-940e5b94fe12-kube-api-access-bfkbm\") pod \"ovnkube-control-plane-749d76644c-8ls4r\" (UID: \"6de4324f-3428-4409-92a4-940e5b94fe12\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.839460 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.851436 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.860738 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.869836 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.884834 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdf
d287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.897894 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.911848 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.930181 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.930744 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.930876 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.930898 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.930916 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.930928 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:09Z","lastTransitionTime":"2026-01-20T11:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.931192 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.931216 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:09 crc kubenswrapper[4725]: E0120 11:05:09.931305 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.931201 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:09 crc kubenswrapper[4725]: E0120 11:05:09.931403 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:09 crc kubenswrapper[4725]: E0120 11:05:09.931468 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.936314 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6de4324f-3428-4409-92a4-940e5b94fe12-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8ls4r\" (UID: \"6de4324f-3428-4409-92a4-940e5b94fe12\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.936353 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6de4324f-3428-4409-92a4-940e5b94fe12-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8ls4r\" (UID: \"6de4324f-3428-4409-92a4-940e5b94fe12\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.936380 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6de4324f-3428-4409-92a4-940e5b94fe12-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8ls4r\" (UID: \"6de4324f-3428-4409-92a4-940e5b94fe12\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.936439 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfkbm\" (UniqueName: \"kubernetes.io/projected/6de4324f-3428-4409-92a4-940e5b94fe12-kube-api-access-bfkbm\") pod \"ovnkube-control-plane-749d76644c-8ls4r\" (UID: \"6de4324f-3428-4409-92a4-940e5b94fe12\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.937258 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6de4324f-3428-4409-92a4-940e5b94fe12-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8ls4r\" (UID: \"6de4324f-3428-4409-92a4-940e5b94fe12\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.937336 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6de4324f-3428-4409-92a4-940e5b94fe12-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8ls4r\" (UID: \"6de4324f-3428-4409-92a4-940e5b94fe12\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.943928 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6de4324f-3428-4409-92a4-940e5b94fe12-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8ls4r\" (UID: \"6de4324f-3428-4409-92a4-940e5b94fe12\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.944294 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.954756 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfkbm\" (UniqueName: \"kubernetes.io/projected/6de4324f-3428-4409-92a4-940e5b94fe12-kube-api-access-bfkbm\") pod \"ovnkube-control-plane-749d76644c-8ls4r\" (UID: \"6de4324f-3428-4409-92a4-940e5b94fe12\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.955708 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.969670 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e5431
9f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.981821 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.992621 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee
88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.009019 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.024917 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.033501 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.033535 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.033545 4725 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.033560 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.033570 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:10Z","lastTransitionTime":"2026-01-20T11:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.039805 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d
10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.052670 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.067706 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.077789 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 08:01:18.033942352 +0000 UTC Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.085505 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.096229 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.112206 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.128739 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.137797 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.137828 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.137837 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.137850 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.137858 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:10Z","lastTransitionTime":"2026-01-20T11:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.138385 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.138409 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.138431 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:10 crc kubenswrapper[4725]: E0120 11:05:10.138537 4725 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 11:05:10 crc kubenswrapper[4725]: E0120 11:05:10.138590 4725 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:26.138575647 +0000 UTC m=+54.346897620 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 11:05:10 crc kubenswrapper[4725]: E0120 11:05:10.138921 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 11:05:10 crc kubenswrapper[4725]: E0120 11:05:10.138945 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 11:05:10 crc kubenswrapper[4725]: E0120 11:05:10.138961 4725 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:05:10 crc kubenswrapper[4725]: E0120 11:05:10.138994 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:26.138983949 +0000 UTC m=+54.347305922 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:05:10 crc kubenswrapper[4725]: E0120 11:05:10.139044 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 11:05:10 crc kubenswrapper[4725]: E0120 11:05:10.139054 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 11:05:10 crc kubenswrapper[4725]: E0120 11:05:10.139060 4725 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:05:10 crc kubenswrapper[4725]: E0120 11:05:10.139087 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:26.139079412 +0000 UTC m=+54.347401385 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.155834 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017096767eeb1b40073e7f4de6ddf7d02039af4f8fea06aa06f0b15faaaffff4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017096767eeb1b40073e7f4de6ddf7d02039af4f8fea06aa06f0b15faaaffff4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"message\\\":\\\"0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603752 5931 reflector.go:311] Stopping reflector *v1.EgressService 
(0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603803 5931 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603923 5931 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.604309 5931 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.604414 5931 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 11:05:08.604458 5931 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"
/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o
://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.168498 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70
b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.240030 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.240057 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.240065 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.240081 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.240110 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:10Z","lastTransitionTime":"2026-01-20T11:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.342551 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.342586 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.342595 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.342608 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.342618 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:10Z","lastTransitionTime":"2026-01-20T11:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.440484 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:10 crc kubenswrapper[4725]: E0120 11:05:10.440648 4725 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 11:05:10 crc kubenswrapper[4725]: E0120 11:05:10.440712 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:26.440697999 +0000 UTC m=+54.649019972 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.444999 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.445038 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.445050 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.445068 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.445099 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:10Z","lastTransitionTime":"2026-01-20T11:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.548612 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.548652 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.548665 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.548680 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.548692 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:10Z","lastTransitionTime":"2026-01-20T11:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.712475 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.712524 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.712533 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.712551 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.712570 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:10Z","lastTransitionTime":"2026-01-20T11:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.717911 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovnkube-controller/0.log" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.722337 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerStarted","Data":"213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3"} Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.723450 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.723864 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-5lfc4"] Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.724349 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:10 crc kubenswrapper[4725]: E0120 11:05:10.724414 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.729266 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" event={"ID":"6de4324f-3428-4409-92a4-940e5b94fe12","Type":"ContainerStarted","Data":"cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405"} Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.729319 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" event={"ID":"6de4324f-3428-4409-92a4-940e5b94fe12","Type":"ContainerStarted","Data":"abb81a1095b54a94c5a5182c1e9a6a73268fc43c55e54d3c0707e2ded1786f3b"} Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.739172 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.760976 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314
731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.773995 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.799592 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017096767eeb1b40073e7f4de6ddf7d02039af4f8fea06aa06f0b15faaaffff4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"message\\\":\\\"0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603752 5931 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603803 5931 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603923 5931 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.604309 5931 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.604414 5931 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 11:05:08.604458 5931 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswi
tch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.807403 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lljhl\" (UniqueName: \"kubernetes.io/projected/a5d55efc-e85a-4a02-a4ce-7355df9fea66-kube-api-access-lljhl\") pod \"network-metrics-daemon-5lfc4\" (UID: \"a5d55efc-e85a-4a02-a4ce-7355df9fea66\") " pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.808344 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs\") pod \"network-metrics-daemon-5lfc4\" (UID: 
\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\") " pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.812867 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"c
ert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/opensh
ift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.814658 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.814699 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.814712 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.814726 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.814735 4725 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:10Z","lastTransitionTime":"2026-01-20T11:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.827019 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.840403 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888
cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.854080 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b70234
9131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.871165 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdf
d287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.885406 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.900499 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.909035 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lljhl\" (UniqueName: \"kubernetes.io/projected/a5d55efc-e85a-4a02-a4ce-7355df9fea66-kube-api-access-lljhl\") pod \"network-metrics-daemon-5lfc4\" (UID: \"a5d55efc-e85a-4a02-a4ce-7355df9fea66\") " pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.909196 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs\") pod \"network-metrics-daemon-5lfc4\" (UID: \"a5d55efc-e85a-4a02-a4ce-7355df9fea66\") " pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:10 crc kubenswrapper[4725]: E0120 11:05:10.909343 4725 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 11:05:10 crc kubenswrapper[4725]: E0120 
11:05:10.909411 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs podName:a5d55efc-e85a-4a02-a4ce-7355df9fea66 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:11.409390093 +0000 UTC m=+39.617712106 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs") pod "network-metrics-daemon-5lfc4" (UID: "a5d55efc-e85a-4a02-a4ce-7355df9fea66") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.913156 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.917794 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.917847 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.917860 4725 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.917879 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.917892 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:10Z","lastTransitionTime":"2026-01-20T11:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.926210 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d
10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.927595 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lljhl\" (UniqueName: \"kubernetes.io/projected/a5d55efc-e85a-4a02-a4ce-7355df9fea66-kube-api-access-lljhl\") pod \"network-metrics-daemon-5lfc4\" (UID: \"a5d55efc-e85a-4a02-a4ce-7355df9fea66\") " pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.940715 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.955220 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.968503 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.983422 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.996071 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b49
1cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.010232 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.020025 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.020064 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.020074 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.020104 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.020115 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:11Z","lastTransitionTime":"2026-01-20T11:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.022076 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.034474 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.049585 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.060731 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.072422 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.078068 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 01:06:15.114095204 +0000 UTC Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.083464 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\
\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.094349 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc08
6a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.106668 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.119526 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.121903 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.121930 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.121938 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.121952 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.121966 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:11Z","lastTransitionTime":"2026-01-20T11:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.132909 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc 
kubenswrapper[4725]: I0120 11:05:11.147455 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2
d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.169826 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.186649 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.221017 4725 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017096767eeb1b40073e7f4de6ddf7d02039af4f8fea06aa06f0b15faaaffff4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"message\\\":\\\"0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603752 5931 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603803 5931 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603923 5931 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.604309 5931 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.604414 5931 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 11:05:08.604458 5931 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswi
tch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.223766 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.223808 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.223818 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.223837 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.223846 4725 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:11Z","lastTransitionTime":"2026-01-20T11:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.328553 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.328600 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.328630 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.328649 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.328662 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:11Z","lastTransitionTime":"2026-01-20T11:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.413961 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs\") pod \"network-metrics-daemon-5lfc4\" (UID: \"a5d55efc-e85a-4a02-a4ce-7355df9fea66\") " pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:11 crc kubenswrapper[4725]: E0120 11:05:11.414182 4725 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 11:05:11 crc kubenswrapper[4725]: E0120 11:05:11.414255 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs podName:a5d55efc-e85a-4a02-a4ce-7355df9fea66 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:12.41423681 +0000 UTC m=+40.622558793 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs") pod "network-metrics-daemon-5lfc4" (UID: "a5d55efc-e85a-4a02-a4ce-7355df9fea66") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.431683 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.431769 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.431804 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.431837 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.431862 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:11Z","lastTransitionTime":"2026-01-20T11:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.534484 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.534554 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.534566 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.534588 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.534601 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:11Z","lastTransitionTime":"2026-01-20T11:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.637981 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.638054 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.638148 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.638186 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.638209 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:11Z","lastTransitionTime":"2026-01-20T11:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.736542 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovnkube-controller/1.log" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.737531 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovnkube-controller/0.log" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.740901 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.740952 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.740975 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.741002 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.741023 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:11Z","lastTransitionTime":"2026-01-20T11:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.741957 4725 generic.go:334] "Generic (PLEG): container finished" podID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerID="213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3" exitCode=1 Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.742029 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerDied","Data":"213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3"} Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.742083 4725 scope.go:117] "RemoveContainer" containerID="017096767eeb1b40073e7f4de6ddf7d02039af4f8fea06aa06f0b15faaaffff4" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.743827 4725 scope.go:117] "RemoveContainer" containerID="213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3" Jan 20 11:05:11 crc kubenswrapper[4725]: E0120 11:05:11.744326 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.749212 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" event={"ID":"6de4324f-3428-4409-92a4-940e5b94fe12","Type":"ContainerStarted","Data":"94804e9db4cc8f6189b493bfd5399bbf7438b62f17a8c6a6521b1e0746303d6a"} Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.761140 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70
b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z"
Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.786856 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z"
Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.803036 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z"
Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.821468 4725 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017096767eeb1b40073e7f4de6ddf7d02039af4f8fea06aa06f0b15faaaffff4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"message\\\":\\\"0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603752 5931 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603803 5931 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603923 5931 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.604309 5931 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.604414 5931 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 11:05:08.604458 5931 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"message\\\":\\\"120 11:05:11.142731 6105 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:11.143046 6105 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 11:05:11.143213 6105 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) 
from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:11.144063 6105 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:11.144108 6105 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:11.144130 6105 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 11:05:11.144164 6105 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:11.144173 6105 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:11.144190 6105 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:11.144200 6105 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:11.144207 6105 factory.go:656] Stopping watch factory\\\\nI0120 11:05:11.144221 6105 ovnkube.go:599] Stopped ovnkube\\\\nI0120 11:05:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mo
untPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.834322 4725 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"
},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.843634 4725 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.843674 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.843683 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.843722 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.843733 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:11Z","lastTransitionTime":"2026-01-20T11:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.847245 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a
1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.860284 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.870534 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.882510 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.898181 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-relea
se\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.911587 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.925569 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.931742 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.931769 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:11 crc kubenswrapper[4725]: E0120 11:05:11.931877 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.931890 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:11 crc kubenswrapper[4725]: E0120 11:05:11.932021 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:11 crc kubenswrapper[4725]: E0120 11:05:11.932175 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.935619 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secret
s/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.946293 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.946337 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.946355 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.946379 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.946397 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:11Z","lastTransitionTime":"2026-01-20T11:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.946561 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.959920 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.970759 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.980998 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc 
kubenswrapper[4725]: I0120 11:05:11.996337 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2
d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.014626 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.035354 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.049519 4725 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.049575 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.049596 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.049623 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.049642 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:12Z","lastTransitionTime":"2026-01-20T11:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.065845 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017096767eeb1b40073e7f4de6ddf7d02039af4f8fea06aa06f0b15faaaffff4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"message\\\":\\\"0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603752 5931 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603803 5931 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603923 5931 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.604309 5931 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.604414 5931 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 11:05:08.604458 5931 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"message\\\":\\\"120 11:05:11.142731 6105 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:11.143046 6105 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 11:05:11.143213 6105 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) 
from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:11.144063 6105 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:11.144108 6105 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:11.144130 6105 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 11:05:11.144164 6105 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:11.144173 6105 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:11.144190 6105 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:11.144200 6105 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:11.144207 6105 factory.go:656] Stopping watch factory\\\\nI0120 11:05:11.144221 6105 ovnkube.go:599] Stopped ovnkube\\\\nI0120 11:05:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mo
untPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.079037 4725 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 20:01:24.124799238 +0000 UTC Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.087842 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\
"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b8279
9488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.114222 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.134088 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.153056 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.153150 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.153168 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.153193 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.153213 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:12Z","lastTransitionTime":"2026-01-20T11:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.157342 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.185350 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdf
d287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.209146 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.229127 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.248322 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.256539 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.256589 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.256606 4725 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.256627 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.256643 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:12Z","lastTransitionTime":"2026-01-20T11:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.263427 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d
10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.277849 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.292807 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.310621 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b
62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.329863 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc 
kubenswrapper[4725]: I0120 11:05:12.359242 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.359317 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.359341 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.359367 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.359385 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:12Z","lastTransitionTime":"2026-01-20T11:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.425915 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs\") pod \"network-metrics-daemon-5lfc4\" (UID: \"a5d55efc-e85a-4a02-a4ce-7355df9fea66\") " pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:12 crc kubenswrapper[4725]: E0120 11:05:12.426212 4725 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 11:05:12 crc kubenswrapper[4725]: E0120 11:05:12.426339 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs podName:a5d55efc-e85a-4a02-a4ce-7355df9fea66 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:14.426307845 +0000 UTC m=+42.634629858 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs") pod "network-metrics-daemon-5lfc4" (UID: "a5d55efc-e85a-4a02-a4ce-7355df9fea66") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.462602 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.462668 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.462692 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.462722 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.462745 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:12Z","lastTransitionTime":"2026-01-20T11:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.565571 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.565629 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.565647 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.565670 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.565685 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:12Z","lastTransitionTime":"2026-01-20T11:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.668711 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.668765 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.668775 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.668790 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.668801 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:12Z","lastTransitionTime":"2026-01-20T11:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.756333 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovnkube-controller/1.log" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.760337 4725 scope.go:117] "RemoveContainer" containerID="213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3" Jan 20 11:05:12 crc kubenswrapper[4725]: E0120 11:05:12.760487 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.770745 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.770800 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.770813 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.770831 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.770845 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:12Z","lastTransitionTime":"2026-01-20T11:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.778971 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.793568 4725 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc 
kubenswrapper[4725]: I0120 11:05:12.808146 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.832779 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.847341 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.867003 4725 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"message\\\":\\\"120 11:05:11.142731 6105 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:11.143046 6105 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0120 11:05:11.143213 6105 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:11.144063 6105 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:11.144108 6105 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:11.144130 6105 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 11:05:11.144164 6105 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:11.144173 6105 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:11.144190 6105 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:11.144200 6105 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:11.144207 6105 factory.go:656] Stopping watch factory\\\\nI0120 11:05:11.144221 6105 ovnkube.go:599] Stopped ovnkube\\\\nI0120 11:05:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf
09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.873256 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.873525 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.873650 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.873782 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.873904 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:12Z","lastTransitionTime":"2026-01-20T11:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.881595 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3
e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.896267 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.910929 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.922980 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.931369 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:12 crc kubenswrapper[4725]: E0120 11:05:12.931560 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.943700 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc47859959
49114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\
\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.959018 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.972567 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.976341 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.976480 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.976561 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.976758 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.976852 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:12Z","lastTransitionTime":"2026-01-20T11:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.984931 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.997069 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.008534 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.019005 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.031132 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.042874 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.053432 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.064269 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.079031 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.079071 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.079100 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.079116 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.079126 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:13Z","lastTransitionTime":"2026-01-20T11:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.079195 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 04:50:32.574656554 +0000 UTC Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.080937 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66
438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\
" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.092379 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc 
kubenswrapper[4725]: I0120 11:05:13.105403 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.122962 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.136534 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.154940 4725 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"message\\\":\\\"120 11:05:11.142731 6105 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:11.143046 6105 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0120 11:05:11.143213 6105 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:11.144063 6105 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:11.144108 6105 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:11.144130 6105 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 11:05:11.144164 6105 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:11.144173 6105 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:11.144190 6105 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:11.144200 6105 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:11.144207 6105 factory.go:656] Stopping watch factory\\\\nI0120 11:05:11.144221 6105 ovnkube.go:599] Stopped ovnkube\\\\nI0120 11:05:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf
09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.170573 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70
b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.181899 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.181938 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.181975 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.181994 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.182006 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:13Z","lastTransitionTime":"2026-01-20T11:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.186653 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.203913 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.234194 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.247700 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-
additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df
312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.260771 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.270918 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.283690 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.283732 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.283741 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.283755 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.283765 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:13Z","lastTransitionTime":"2026-01-20T11:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.390377 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.390411 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.390423 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.390437 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.390446 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:13Z","lastTransitionTime":"2026-01-20T11:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.493035 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.493157 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.493174 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.493191 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.493203 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:13Z","lastTransitionTime":"2026-01-20T11:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.595395 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.595455 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.595475 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.595500 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.595517 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:13Z","lastTransitionTime":"2026-01-20T11:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.699685 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.700039 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.700053 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.700077 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.700125 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:13Z","lastTransitionTime":"2026-01-20T11:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.803119 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.803415 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.803581 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.803745 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.803899 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:13Z","lastTransitionTime":"2026-01-20T11:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.906932 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.906972 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.906983 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.906999 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.907011 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:13Z","lastTransitionTime":"2026-01-20T11:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.936937 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 11:05:13 crc kubenswrapper[4725]: E0120 11:05:13.937650 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.936990 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 11:05:13 crc kubenswrapper[4725]: E0120 11:05:13.937882 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.936926 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 11:05:13 crc kubenswrapper[4725]: E0120 11:05:13.938328 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.011025 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.011069 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.011105 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.011123 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.011134 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:14Z","lastTransitionTime":"2026-01-20T11:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.079735 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 19:20:43.794980762 +0000 UTC Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.113824 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.113873 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.113892 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.113918 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.113938 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:14Z","lastTransitionTime":"2026-01-20T11:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.216366 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.216465 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.216499 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.216532 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.216555 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:14Z","lastTransitionTime":"2026-01-20T11:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.318813 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.318877 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.318894 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.318922 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.318946 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:14Z","lastTransitionTime":"2026-01-20T11:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.422823 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.422878 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.422891 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.422914 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.422926 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:14Z","lastTransitionTime":"2026-01-20T11:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.448249 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs\") pod \"network-metrics-daemon-5lfc4\" (UID: \"a5d55efc-e85a-4a02-a4ce-7355df9fea66\") " pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:14 crc kubenswrapper[4725]: E0120 11:05:14.448498 4725 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 11:05:14 crc kubenswrapper[4725]: E0120 11:05:14.448596 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs podName:a5d55efc-e85a-4a02-a4ce-7355df9fea66 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:18.448576264 +0000 UTC m=+46.656898247 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs") pod "network-metrics-daemon-5lfc4" (UID: "a5d55efc-e85a-4a02-a4ce-7355df9fea66") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.526597 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.526676 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.526700 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.526736 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.526758 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:14Z","lastTransitionTime":"2026-01-20T11:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.629628 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.629702 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.629730 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.629761 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.629784 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:14Z","lastTransitionTime":"2026-01-20T11:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.732809 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.732884 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.732907 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.732940 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.733032 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:14Z","lastTransitionTime":"2026-01-20T11:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.836397 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.836449 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.836466 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.836489 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.836506 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:14Z","lastTransitionTime":"2026-01-20T11:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.934666 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:14 crc kubenswrapper[4725]: E0120 11:05:14.934936 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.940744 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.941383 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.941410 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.941430 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.941440 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:14Z","lastTransitionTime":"2026-01-20T11:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.043708 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.043784 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.043817 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.043849 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.043870 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:15Z","lastTransitionTime":"2026-01-20T11:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.080917 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 10:52:22.249678218 +0000 UTC Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.147488 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.147569 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.147592 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.147624 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.147648 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:15Z","lastTransitionTime":"2026-01-20T11:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.252250 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.252299 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.252312 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.252330 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.252344 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:15Z","lastTransitionTime":"2026-01-20T11:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.355817 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.355885 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.355903 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.355928 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.355946 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:15Z","lastTransitionTime":"2026-01-20T11:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.460404 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.460449 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.460467 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.460494 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.460511 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:15Z","lastTransitionTime":"2026-01-20T11:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.562978 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.563042 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.563058 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.563108 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.563126 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:15Z","lastTransitionTime":"2026-01-20T11:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.665734 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.665795 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.665810 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.665833 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.665849 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:15Z","lastTransitionTime":"2026-01-20T11:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.772490 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.772564 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.772586 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.772615 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.772630 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:15Z","lastTransitionTime":"2026-01-20T11:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.876284 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.876341 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.876359 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.876383 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.876401 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:15Z","lastTransitionTime":"2026-01-20T11:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.932485 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.932485 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.932519 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:15 crc kubenswrapper[4725]: E0120 11:05:15.932812 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:15 crc kubenswrapper[4725]: E0120 11:05:15.932903 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:15 crc kubenswrapper[4725]: E0120 11:05:15.933058 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.978694 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.978735 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.978747 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.978764 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.978777 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:15Z","lastTransitionTime":"2026-01-20T11:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.081018 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 09:30:47.537260507 +0000 UTC Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.082286 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.082337 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.082354 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.082374 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.082390 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:16Z","lastTransitionTime":"2026-01-20T11:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.186336 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.186405 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.186429 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.186459 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.186486 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:16Z","lastTransitionTime":"2026-01-20T11:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.289249 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.289302 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.289313 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.289331 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.289346 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:16Z","lastTransitionTime":"2026-01-20T11:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.392785 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.392875 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.392894 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.392923 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.392941 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:16Z","lastTransitionTime":"2026-01-20T11:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.496670 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.496726 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.496744 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.496768 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.496785 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:16Z","lastTransitionTime":"2026-01-20T11:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.598889 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.598934 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.598946 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.598963 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.598976 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:16Z","lastTransitionTime":"2026-01-20T11:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.701502 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.701803 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.701896 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.702001 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.702108 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:16Z","lastTransitionTime":"2026-01-20T11:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.805183 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.805227 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.805237 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.805254 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.805271 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:16Z","lastTransitionTime":"2026-01-20T11:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.907538 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.907583 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.907591 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.907605 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.907615 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:16Z","lastTransitionTime":"2026-01-20T11:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.932390 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:16 crc kubenswrapper[4725]: E0120 11:05:16.932598 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.014169 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.014262 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.014279 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.014310 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.014327 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:17Z","lastTransitionTime":"2026-01-20T11:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.081755 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 06:50:51.699149979 +0000 UTC Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.084137 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.084162 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.084172 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.084187 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.084210 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:17Z","lastTransitionTime":"2026-01-20T11:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:17 crc kubenswrapper[4725]: E0120 11:05:17.103294 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:17Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.108535 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.108570 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.108582 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.108601 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.108614 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:17Z","lastTransitionTime":"2026-01-20T11:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:17 crc kubenswrapper[4725]: E0120 11:05:17.132492 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:17Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.137650 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.137724 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.137737 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.137751 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.137759 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:17Z","lastTransitionTime":"2026-01-20T11:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:17 crc kubenswrapper[4725]: E0120 11:05:17.161442 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:17Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.166953 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.167024 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.167043 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.167067 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.167126 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:17Z","lastTransitionTime":"2026-01-20T11:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:17 crc kubenswrapper[4725]: E0120 11:05:17.183758 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:17Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.187726 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.187796 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.187811 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.187829 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.187844 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:17Z","lastTransitionTime":"2026-01-20T11:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:17 crc kubenswrapper[4725]: E0120 11:05:17.201516 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:17Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:17 crc kubenswrapper[4725]: E0120 11:05:17.201695 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.203357 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.203381 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.203389 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.203403 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.203412 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:17Z","lastTransitionTime":"2026-01-20T11:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.306540 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.306619 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.306644 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.306675 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.306698 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:17Z","lastTransitionTime":"2026-01-20T11:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.409811 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.409862 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.409874 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.409898 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.409912 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:17Z","lastTransitionTime":"2026-01-20T11:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.515525 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.515653 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.515677 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.515705 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.515750 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:17Z","lastTransitionTime":"2026-01-20T11:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.619003 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.619101 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.619122 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.619146 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.619159 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:17Z","lastTransitionTime":"2026-01-20T11:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.738165 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.738273 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.738324 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.738391 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.738433 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:17Z","lastTransitionTime":"2026-01-20T11:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.842483 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.842536 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.842550 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.842573 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.842587 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:17Z","lastTransitionTime":"2026-01-20T11:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.931878 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.932142 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:17 crc kubenswrapper[4725]: E0120 11:05:17.932133 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.932147 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:17 crc kubenswrapper[4725]: E0120 11:05:17.932290 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:17 crc kubenswrapper[4725]: E0120 11:05:17.932427 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.946003 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.946084 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.946149 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.946178 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.946199 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:17Z","lastTransitionTime":"2026-01-20T11:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.048986 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.049036 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.049053 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.049108 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.049136 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:18Z","lastTransitionTime":"2026-01-20T11:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.082792 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 06:09:41.561282613 +0000 UTC Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.152204 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.152245 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.152257 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.152277 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.152288 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:18Z","lastTransitionTime":"2026-01-20T11:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.254719 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.254763 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.254772 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.254789 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.254801 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:18Z","lastTransitionTime":"2026-01-20T11:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.357162 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.357205 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.357214 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.357229 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.357239 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:18Z","lastTransitionTime":"2026-01-20T11:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.460055 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.460111 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.460124 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.460140 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.460150 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:18Z","lastTransitionTime":"2026-01-20T11:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.544674 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs\") pod \"network-metrics-daemon-5lfc4\" (UID: \"a5d55efc-e85a-4a02-a4ce-7355df9fea66\") " pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:18 crc kubenswrapper[4725]: E0120 11:05:18.544885 4725 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 11:05:18 crc kubenswrapper[4725]: E0120 11:05:18.544957 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs podName:a5d55efc-e85a-4a02-a4ce-7355df9fea66 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:26.544941812 +0000 UTC m=+54.753263785 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs") pod "network-metrics-daemon-5lfc4" (UID: "a5d55efc-e85a-4a02-a4ce-7355df9fea66") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.562152 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.562210 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.562228 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.562251 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.562268 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:18Z","lastTransitionTime":"2026-01-20T11:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.665041 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.665079 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.665114 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.665140 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.665163 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:18Z","lastTransitionTime":"2026-01-20T11:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.767770 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.767828 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.767863 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.767887 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.767905 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:18Z","lastTransitionTime":"2026-01-20T11:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.870736 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.870809 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.870826 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.870851 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.870869 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:18Z","lastTransitionTime":"2026-01-20T11:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.932218 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:18 crc kubenswrapper[4725]: E0120 11:05:18.932521 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.974192 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.974246 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.974258 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.974277 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.974290 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:18Z","lastTransitionTime":"2026-01-20T11:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.077169 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.077233 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.077249 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.077273 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.077297 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:19Z","lastTransitionTime":"2026-01-20T11:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.083362 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 19:15:47.220643713 +0000 UTC Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.180524 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.180564 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.180576 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.180592 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.180603 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:19Z","lastTransitionTime":"2026-01-20T11:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.282922 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.282998 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.283012 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.283040 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.283052 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:19Z","lastTransitionTime":"2026-01-20T11:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.386211 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.386261 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.386273 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.386290 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.386303 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:19Z","lastTransitionTime":"2026-01-20T11:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.491479 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.491555 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.491566 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.491587 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.491600 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:19Z","lastTransitionTime":"2026-01-20T11:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.594038 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.594165 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.594192 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.594226 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.594251 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:19Z","lastTransitionTime":"2026-01-20T11:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.697638 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.697700 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.697718 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.697743 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.697761 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:19Z","lastTransitionTime":"2026-01-20T11:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.800981 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.801065 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.801135 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.801167 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.801233 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:19Z","lastTransitionTime":"2026-01-20T11:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.904616 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.904702 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.904783 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.904810 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.904828 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:19Z","lastTransitionTime":"2026-01-20T11:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.931968 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.932014 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.932007 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:19 crc kubenswrapper[4725]: E0120 11:05:19.932332 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:19 crc kubenswrapper[4725]: E0120 11:05:19.932437 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:19 crc kubenswrapper[4725]: E0120 11:05:19.932683 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.007270 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.007329 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.007341 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.007361 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.007372 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:20Z","lastTransitionTime":"2026-01-20T11:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.084149 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 19:04:58.719384255 +0000 UTC Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.111837 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.111891 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.111904 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.111923 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.111935 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:20Z","lastTransitionTime":"2026-01-20T11:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.215965 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.216222 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.216250 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.216283 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.216306 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:20Z","lastTransitionTime":"2026-01-20T11:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.319893 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.320020 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.320060 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.320138 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.320169 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:20Z","lastTransitionTime":"2026-01-20T11:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.423625 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.423690 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.423708 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.423733 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.423769 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:20Z","lastTransitionTime":"2026-01-20T11:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.527299 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.527357 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.527375 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.527398 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.527416 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:20Z","lastTransitionTime":"2026-01-20T11:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.630779 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.630860 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.630880 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.630906 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.630928 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:20Z","lastTransitionTime":"2026-01-20T11:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.735415 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.735485 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.735509 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.735540 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.735565 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:20Z","lastTransitionTime":"2026-01-20T11:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.838408 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.838476 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.838495 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.838520 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.838538 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:20Z","lastTransitionTime":"2026-01-20T11:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.931452 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:20 crc kubenswrapper[4725]: E0120 11:05:20.931677 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.940916 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.940963 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.940977 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.940994 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.941006 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:20Z","lastTransitionTime":"2026-01-20T11:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.044330 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.044394 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.044413 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.044439 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.044460 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:21Z","lastTransitionTime":"2026-01-20T11:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.084934 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 14:57:57.000148602 +0000 UTC Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.148426 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.148489 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.148507 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.148533 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.148555 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:21Z","lastTransitionTime":"2026-01-20T11:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.252134 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.252185 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.252198 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.252217 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.252230 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:21Z","lastTransitionTime":"2026-01-20T11:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.355519 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.355568 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.355582 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.355601 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.355615 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:21Z","lastTransitionTime":"2026-01-20T11:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.459470 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.459526 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.459538 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.459560 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.459573 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:21Z","lastTransitionTime":"2026-01-20T11:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.562847 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.563174 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.563252 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.563336 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.563435 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:21Z","lastTransitionTime":"2026-01-20T11:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.666240 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.666320 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.666345 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.666401 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.666429 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:21Z","lastTransitionTime":"2026-01-20T11:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.769770 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.769830 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.769844 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.769865 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.769883 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:21Z","lastTransitionTime":"2026-01-20T11:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.873863 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.873939 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.873964 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.873996 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.874020 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:21Z","lastTransitionTime":"2026-01-20T11:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.931625 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.931646 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.931846 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:21 crc kubenswrapper[4725]: E0120 11:05:21.932040 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:21 crc kubenswrapper[4725]: E0120 11:05:21.932223 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:21 crc kubenswrapper[4725]: E0120 11:05:21.932363 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.978182 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.978248 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.978263 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.978287 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.978302 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:21Z","lastTransitionTime":"2026-01-20T11:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.081166 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.081216 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.081232 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.081258 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.081277 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:22Z","lastTransitionTime":"2026-01-20T11:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.085967 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 12:01:48.667504941 +0000 UTC Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.184057 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.184126 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.184136 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.184153 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.184166 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:22Z","lastTransitionTime":"2026-01-20T11:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.287047 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.287105 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.287115 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.287129 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.287138 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:22Z","lastTransitionTime":"2026-01-20T11:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.390197 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.390246 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.390258 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.390275 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.390292 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:22Z","lastTransitionTime":"2026-01-20T11:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.493366 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.493470 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.493492 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.493515 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.493532 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:22Z","lastTransitionTime":"2026-01-20T11:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.595846 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.595876 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.595885 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.595898 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.595907 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:22Z","lastTransitionTime":"2026-01-20T11:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.698568 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.699022 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.699286 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.699514 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.699732 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:22Z","lastTransitionTime":"2026-01-20T11:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.802755 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.802781 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.802789 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.802802 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.802810 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:22Z","lastTransitionTime":"2026-01-20T11:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.905558 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.906126 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.906331 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.906530 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.925969 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:22Z","lastTransitionTime":"2026-01-20T11:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.931886 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:22 crc kubenswrapper[4725]: E0120 11:05:22.932195 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.950882 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:22Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.973167 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b
62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:22Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.994168 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:22Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:23 crc 
kubenswrapper[4725]: I0120 11:05:23.017332 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2
d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.028110 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.028142 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.028152 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.028184 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.028196 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:23Z","lastTransitionTime":"2026-01-20T11:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.045546 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.061204 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.084712 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"message\\\":\\\"120 11:05:11.142731 6105 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:11.143046 6105 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0120 11:05:11.143213 6105 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:11.144063 6105 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:11.144108 6105 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:11.144130 6105 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 11:05:11.144164 6105 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:11.144173 6105 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:11.144190 6105 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:11.144200 6105 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:11.144207 6105 factory.go:656] Stopping watch factory\\\\nI0120 11:05:11.144221 6105 ovnkube.go:599] Stopped ovnkube\\\\nI0120 11:05:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf
09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.086746 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 13:38:21.552269967 +0000 UTC Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.103043 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.118426 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.130098 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.130157 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.130175 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.130197 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.130216 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:23Z","lastTransitionTime":"2026-01-20T11:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.140386 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.153482 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.167072 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab4
4ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\
":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.185372 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-0
1-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.203339 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.221684 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.232403 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.232444 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.232456 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.232474 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.232489 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:23Z","lastTransitionTime":"2026-01-20T11:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.237656 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.251856 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.335432 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.335487 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.335505 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.335529 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.335546 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:23Z","lastTransitionTime":"2026-01-20T11:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.437722 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.437777 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.437790 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.437807 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.437819 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:23Z","lastTransitionTime":"2026-01-20T11:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.540705 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.540819 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.540848 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.540888 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.540914 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:23Z","lastTransitionTime":"2026-01-20T11:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.644282 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.644384 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.644402 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.644424 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.644444 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:23Z","lastTransitionTime":"2026-01-20T11:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.746684 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.746770 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.746796 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.746828 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.746849 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:23Z","lastTransitionTime":"2026-01-20T11:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.849673 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.849738 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.849749 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.849770 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.849787 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:23Z","lastTransitionTime":"2026-01-20T11:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.931607 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.931825 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.931998 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:23 crc kubenswrapper[4725]: E0120 11:05:23.931985 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:23 crc kubenswrapper[4725]: E0120 11:05:23.932202 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:23 crc kubenswrapper[4725]: E0120 11:05:23.932334 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.952729 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.952780 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.952792 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.952812 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.952824 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:23Z","lastTransitionTime":"2026-01-20T11:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.055717 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.055782 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.055800 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.055825 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.055845 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:24Z","lastTransitionTime":"2026-01-20T11:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.087435 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 09:14:11.232490664 +0000 UTC Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.158687 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.158760 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.158774 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.158804 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.158824 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:24Z","lastTransitionTime":"2026-01-20T11:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.245827 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.262141 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.262191 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.262204 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.262223 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.262236 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:24Z","lastTransitionTime":"2026-01-20T11:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.264297 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.271156 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.294972 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.312397 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.327144 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.362969 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.365793 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.365856 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:24 crc 
kubenswrapper[4725]: I0120 11:05:24.365892 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.365917 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.365934 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:24Z","lastTransitionTime":"2026-01-20T11:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.382589 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b
62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.396663 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc 
kubenswrapper[4725]: I0120 11:05:24.413777 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2
d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.440857 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.457869 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.476473 4725 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.476521 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.476536 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.476558 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.476573 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:24Z","lastTransitionTime":"2026-01-20T11:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.508411 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"message\\\":\\\"120 11:05:11.142731 6105 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:11.143046 6105 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0120 11:05:11.143213 6105 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:11.144063 6105 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:11.144108 6105 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:11.144130 6105 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 11:05:11.144164 6105 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:11.144173 6105 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:11.144190 6105 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:11.144200 6105 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:11.144207 6105 factory.go:656] Stopping watch factory\\\\nI0120 11:05:11.144221 6105 ovnkube.go:599] Stopped ovnkube\\\\nI0120 11:05:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf
09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.529617 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.552983 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.568197 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.580193 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.580237 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.580247 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.580265 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.580276 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:24Z","lastTransitionTime":"2026-01-20T11:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.589177 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.611994 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdf
d287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.634669 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.683783 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.683845 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.683862 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.683888 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.683906 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:24Z","lastTransitionTime":"2026-01-20T11:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.786275 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.786314 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.786325 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.786340 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.786348 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:24Z","lastTransitionTime":"2026-01-20T11:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.889499 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.889589 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.889619 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.889655 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.889689 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:24Z","lastTransitionTime":"2026-01-20T11:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.931438 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:24 crc kubenswrapper[4725]: E0120 11:05:24.931592 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.992851 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.992920 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.992940 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.992966 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.992984 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:24Z","lastTransitionTime":"2026-01-20T11:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.088482 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 14:04:33.537960525 +0000 UTC Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.096847 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.096904 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.096922 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.096946 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.096964 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:25Z","lastTransitionTime":"2026-01-20T11:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.200233 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.200300 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.200326 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.200356 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.200376 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:25Z","lastTransitionTime":"2026-01-20T11:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.303751 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.303853 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.303881 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.303909 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.303930 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:25Z","lastTransitionTime":"2026-01-20T11:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.407332 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.407375 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.407385 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.407401 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.407411 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:25Z","lastTransitionTime":"2026-01-20T11:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.509689 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.509749 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.509760 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.509779 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.509792 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:25Z","lastTransitionTime":"2026-01-20T11:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.613053 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.613141 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.613160 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.613190 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.613208 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:25Z","lastTransitionTime":"2026-01-20T11:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.716021 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.716064 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.716100 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.716120 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.716132 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:25Z","lastTransitionTime":"2026-01-20T11:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.818906 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.818971 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.818988 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.819009 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.819026 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:25Z","lastTransitionTime":"2026-01-20T11:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.859973 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:05:25 crc kubenswrapper[4725]: E0120 11:05:25.860309 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-20 11:05:57.860273377 +0000 UTC m=+86.068595390 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.921760 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.921841 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.921854 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.921870 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.921883 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:25Z","lastTransitionTime":"2026-01-20T11:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.931536 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.931570 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.931551 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:25 crc kubenswrapper[4725]: E0120 11:05:25.931657 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:25 crc kubenswrapper[4725]: E0120 11:05:25.931803 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:25 crc kubenswrapper[4725]: E0120 11:05:25.931894 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.025460 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.025522 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.025544 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.025573 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.025595 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:26Z","lastTransitionTime":"2026-01-20T11:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.088699 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 14:29:36.739793704 +0000 UTC Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.135347 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.135394 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.135406 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.135424 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.135435 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:26Z","lastTransitionTime":"2026-01-20T11:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.162929 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.163007 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.163048 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:26 crc kubenswrapper[4725]: E0120 11:05:26.163226 4725 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 11:05:26 crc kubenswrapper[4725]: E0120 11:05:26.163336 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:58.163313075 +0000 UTC m=+86.371635078 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 11:05:26 crc kubenswrapper[4725]: E0120 11:05:26.163366 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 11:05:26 crc kubenswrapper[4725]: E0120 11:05:26.163382 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 11:05:26 crc kubenswrapper[4725]: E0120 11:05:26.163443 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 11:05:26 crc kubenswrapper[4725]: E0120 11:05:26.163469 4725 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:05:26 crc kubenswrapper[4725]: E0120 11:05:26.163408 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 11:05:26 crc kubenswrapper[4725]: E0120 11:05:26.163518 4725 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:05:26 crc kubenswrapper[4725]: E0120 11:05:26.163568 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:58.163535351 +0000 UTC m=+86.371857404 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:05:26 crc kubenswrapper[4725]: E0120 11:05:26.163610 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:58.163592443 +0000 UTC m=+86.371914606 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.238891 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.238950 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.238974 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.239007 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.239031 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:26Z","lastTransitionTime":"2026-01-20T11:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.342953 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.343041 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.343059 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.343123 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.343143 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:26Z","lastTransitionTime":"2026-01-20T11:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.446856 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.446930 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.446957 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.446987 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.447009 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:26Z","lastTransitionTime":"2026-01-20T11:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.465968 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:26 crc kubenswrapper[4725]: E0120 11:05:26.466196 4725 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 11:05:26 crc kubenswrapper[4725]: E0120 11:05:26.466303 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:58.466275561 +0000 UTC m=+86.674597564 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.550418 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.550465 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.550481 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.550501 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.550515 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:26Z","lastTransitionTime":"2026-01-20T11:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.566760 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs\") pod \"network-metrics-daemon-5lfc4\" (UID: \"a5d55efc-e85a-4a02-a4ce-7355df9fea66\") " pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:26 crc kubenswrapper[4725]: E0120 11:05:26.567002 4725 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 11:05:26 crc kubenswrapper[4725]: E0120 11:05:26.567177 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs podName:a5d55efc-e85a-4a02-a4ce-7355df9fea66 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:42.567135256 +0000 UTC m=+70.775457329 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs") pod "network-metrics-daemon-5lfc4" (UID: "a5d55efc-e85a-4a02-a4ce-7355df9fea66") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.653523 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.653581 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.653600 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.653628 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.653652 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:26Z","lastTransitionTime":"2026-01-20T11:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.756425 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.756502 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.756525 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.756557 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.756580 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:26Z","lastTransitionTime":"2026-01-20T11:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.860938 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.861031 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.861053 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.861163 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.861191 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:26Z","lastTransitionTime":"2026-01-20T11:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.931454 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:26 crc kubenswrapper[4725]: E0120 11:05:26.931612 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.963700 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.964169 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.964322 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.964406 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.964481 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:26Z","lastTransitionTime":"2026-01-20T11:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.067510 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.067559 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.067574 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.067592 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.067606 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:27Z","lastTransitionTime":"2026-01-20T11:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.089308 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 12:44:42.589106877 +0000 UTC Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.170648 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.170724 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.170752 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.170779 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.170797 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:27Z","lastTransitionTime":"2026-01-20T11:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.232530 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.232588 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.232600 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.232617 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.232630 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:27Z","lastTransitionTime":"2026-01-20T11:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:27 crc kubenswrapper[4725]: E0120 11:05:27.250853 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:27Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.257173 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.257243 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.257262 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.257293 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.257312 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:27Z","lastTransitionTime":"2026-01-20T11:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:27 crc kubenswrapper[4725]: E0120 11:05:27.278907 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:27Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.284016 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.284068 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.284101 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.284118 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.284132 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:27Z","lastTransitionTime":"2026-01-20T11:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:27 crc kubenswrapper[4725]: E0120 11:05:27.301954 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:27Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.307733 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.307777 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.307790 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.307810 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.307824 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:27Z","lastTransitionTime":"2026-01-20T11:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:27 crc kubenswrapper[4725]: E0120 11:05:27.324373 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:27Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.329680 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.329716 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.329728 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.329744 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.329758 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:27Z","lastTransitionTime":"2026-01-20T11:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:27 crc kubenswrapper[4725]: E0120 11:05:27.352795 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:27Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:27 crc kubenswrapper[4725]: E0120 11:05:27.353041 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.354709 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.354749 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.354765 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.354787 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.354804 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:27Z","lastTransitionTime":"2026-01-20T11:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.458421 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.458487 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.458505 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.458527 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.458544 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:27Z","lastTransitionTime":"2026-01-20T11:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.561302 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.561361 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.561378 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.561397 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.561409 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:27Z","lastTransitionTime":"2026-01-20T11:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.663919 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.663970 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.663982 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.664019 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.664031 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:27Z","lastTransitionTime":"2026-01-20T11:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.766499 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.766550 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.766562 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.766577 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.766588 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:27Z","lastTransitionTime":"2026-01-20T11:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.868806 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.868856 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.868866 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.868882 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.868891 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:27Z","lastTransitionTime":"2026-01-20T11:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.931752 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.931859 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.931779 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:27 crc kubenswrapper[4725]: E0120 11:05:27.931908 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:27 crc kubenswrapper[4725]: E0120 11:05:27.932199 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:27 crc kubenswrapper[4725]: E0120 11:05:27.932169 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.933012 4725 scope.go:117] "RemoveContainer" containerID="213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.971727 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.971772 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.971790 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.971814 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.971831 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:27Z","lastTransitionTime":"2026-01-20T11:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.074710 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.075000 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.075022 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.075051 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.075074 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:28Z","lastTransitionTime":"2026-01-20T11:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.089543 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 02:00:43.886607422 +0000 UTC Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.177421 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.177476 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.177606 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.177639 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.177653 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:28Z","lastTransitionTime":"2026-01-20T11:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.281315 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.281434 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.281450 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.281470 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.281491 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:28Z","lastTransitionTime":"2026-01-20T11:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.383912 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.383957 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.383967 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.383983 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.383993 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:28Z","lastTransitionTime":"2026-01-20T11:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.486632 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.486673 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.486685 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.486700 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.486714 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:28Z","lastTransitionTime":"2026-01-20T11:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.589410 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.589472 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.589494 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.589521 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.589543 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:28Z","lastTransitionTime":"2026-01-20T11:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.691841 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.691886 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.691901 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.691922 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.691936 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:28Z","lastTransitionTime":"2026-01-20T11:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.795181 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.795245 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.795257 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.795291 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.795301 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:28Z","lastTransitionTime":"2026-01-20T11:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.855229 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovnkube-controller/1.log" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.857872 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerStarted","Data":"0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a"} Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.858485 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.877136 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 
11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:28Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.897962 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.898013 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.898024 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.898039 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.898049 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:28Z","lastTransitionTime":"2026-01-20T11:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.902097 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:28Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.915675 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:28Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.931979 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:28 crc kubenswrapper[4725]: E0120 11:05:28.932128 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.936531 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"message\\\":\\\"120 11:05:11.142731 6105 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:11.143046 6105 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0120 11:05:11.143213 6105 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:11.144063 6105 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:11.144108 6105 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:11.144130 6105 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 11:05:11.144164 6105 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:11.144173 6105 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:11.144190 6105 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:11.144200 6105 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:11.144207 6105 factory.go:656] Stopping watch factory\\\\nI0120 11:05:11.144221 6105 ovnkube.go:599] Stopped ovnkube\\\\nI0120 
11:05:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:28Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.948939 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b70234
9131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:28Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.963613 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdf
d287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:28Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.976344 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:28Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.988186 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:28Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.012465 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.012506 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.012518 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.012536 
4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.012548 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:29Z","lastTransitionTime":"2026-01-20T11:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.020680 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5489d4-628b-4079-b05c-06f8dcf74f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9068777e4f01e848a82bf288d7a3392cdc91101ee3a01bb4a81df93821dd63a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c38f828d227876595bd9deb8b2d85fb59815051b2e2bfa76a4708ffed580f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35e6d9a60b46ffa8c3718ab0311f05ffc829ac99c3a07206ee12402ebad872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-reso
urces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:29Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.033287 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:29Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.049411 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:29Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.061213 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"n
ode-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:29Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.076436 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:29Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.090339 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 23:26:47.049585 +0000 UTC Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.114587 4725 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.114623 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.114635 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.114651 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.114662 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:29Z","lastTransitionTime":"2026-01-20T11:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.185307 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:29Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.197893 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:29Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.214134 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:29Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.216681 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.216709 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.216720 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.216733 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.216742 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:29Z","lastTransitionTime":"2026-01-20T11:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.226649 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"
ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:29Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.239213 4725 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:29Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:29 crc 
kubenswrapper[4725]: I0120 11:05:29.318657 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.318693 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.318704 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.318717 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.318727 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:29Z","lastTransitionTime":"2026-01-20T11:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.421320 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.421352 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.421361 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.421385 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.421398 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:29Z","lastTransitionTime":"2026-01-20T11:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.524775 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.524854 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.524880 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.524913 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.524938 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:29Z","lastTransitionTime":"2026-01-20T11:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.628287 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.628338 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.628347 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.628363 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.628372 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:29Z","lastTransitionTime":"2026-01-20T11:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.731707 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.731771 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.731809 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.731842 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.731866 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:29Z","lastTransitionTime":"2026-01-20T11:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.834474 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.834525 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.834538 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.834556 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.834568 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:29Z","lastTransitionTime":"2026-01-20T11:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.862069 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovnkube-controller/2.log" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.862648 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovnkube-controller/1.log" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.864808 4725 generic.go:334] "Generic (PLEG): container finished" podID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerID="0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a" exitCode=1 Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.864843 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerDied","Data":"0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a"} Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.864876 4725 scope.go:117] "RemoveContainer" containerID="213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.865685 4725 scope.go:117] "RemoveContainer" containerID="0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a" Jan 20 11:05:29 crc kubenswrapper[4725]: E0120 11:05:29.865859 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.989828 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:29 crc kubenswrapper[4725]: E0120 11:05:29.989994 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.990031 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.990212 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:29 crc kubenswrapper[4725]: E0120 11:05:29.990330 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:29 crc kubenswrapper[4725]: E0120 11:05:29.990459 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.991953 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.991982 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.991990 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.992002 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.992013 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:29Z","lastTransitionTime":"2026-01-20T11:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.008074 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.032633 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.050420 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.070161 4725 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"message\\\":\\\"120 11:05:11.142731 6105 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:11.143046 6105 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0120 11:05:11.143213 6105 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:11.144063 6105 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:11.144108 6105 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:11.144130 6105 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 11:05:11.144164 6105 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:11.144173 6105 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:11.144190 6105 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:11.144200 6105 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:11.144207 6105 factory.go:656] Stopping watch factory\\\\nI0120 11:05:11.144221 6105 ovnkube.go:599] Stopped ovnkube\\\\nI0120 11:05:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:29Z\\\",\\\"message\\\":\\\"20 11:05:29.267425 6319 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 11:05:29.267458 6319 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 11:05:29.267476 6319 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 11:05:29.268196 6319 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 11:05:29.268265 6319 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:29.268284 6319 handler.go:190] Sending *v1.Namespace event 
handler 5 for removal\\\\nI0120 11:05:29.268333 6319 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 11:05:29.268348 6319 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 11:05:29.268385 6319 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 11:05:29.268403 6319 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:29.268419 6319 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:29.268431 6319 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:29.268442 6319 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 11:05:29.269138 6319 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:29.269231 6319 factory.go:656] Stopping watch factory\\\\nI0120 11:05:29.269255 6319 ovnkube.go:599] Stopped ovnkube\\\\nI0120 11:05:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni
-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.088194 4725 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.090713 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 20:26:39.979201558 +0000 UTC Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.094364 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.094402 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.094412 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.094426 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.094435 4725 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:30Z","lastTransitionTime":"2026-01-20T11:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.102774 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5489d4-628b-4079-b05c-06f8dcf74f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9068777e4f01e848a82bf288d7a3392cdc91101ee3a01bb4a81df93821dd63a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c38f828d227876595bd9deb8b2d85fb59815051b2e2bfa76a4708ffed580f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35e6d9a60b46ffa8c3718ab0311f05ffc829ac99c3a07206ee12402ebad872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.118006 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.129560 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.141010 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.162043 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-relea
se\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.176824 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.189637 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.196629 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.196660 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.196672 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.196709 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.196723 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:30Z","lastTransitionTime":"2026-01-20T11:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.201595 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.213407 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.224944 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.239243 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.255336 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b
62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.269227 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc 
kubenswrapper[4725]: I0120 11:05:30.299639 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.299690 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.299702 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.299719 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.299732 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:30Z","lastTransitionTime":"2026-01-20T11:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.402814 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.402856 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.402866 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.402882 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.402893 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:30Z","lastTransitionTime":"2026-01-20T11:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.506232 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.506272 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.506285 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.506303 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.506316 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:30Z","lastTransitionTime":"2026-01-20T11:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.609067 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.609163 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.609183 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.609212 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.609241 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:30Z","lastTransitionTime":"2026-01-20T11:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.712057 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.712127 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.712136 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.712152 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.712160 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:30Z","lastTransitionTime":"2026-01-20T11:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.815322 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.815367 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.815382 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.815399 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.815411 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:30Z","lastTransitionTime":"2026-01-20T11:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.869566 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovnkube-controller/2.log" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.874325 4725 scope.go:117] "RemoveContainer" containerID="0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a" Jan 20 11:05:30 crc kubenswrapper[4725]: E0120 11:05:30.874593 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.891191 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.906492 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5489d4-628b-4079-b05c-06f8dcf74f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9068777e4f01e848a82bf288d7a3392cdc91101ee3a01bb4a81df93821dd63a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c38f828d227876595bd9deb8b2d85fb59815051b2e2bfa76a4708ffed580f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35e6d9a60b46ffa8c3718ab0311f05ffc829ac99c3a07206ee12402ebad872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.917594 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.917635 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.917645 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.917671 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.917682 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:30Z","lastTransitionTime":"2026-01-20T11:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.922145 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.931945 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:30 crc kubenswrapper[4725]: E0120 11:05:30.932061 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.940376 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.955233 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.971326 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdf
d287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.985101 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.999034 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.020381 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:31Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.021978 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.022008 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.022018 4725 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.022034 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.022044 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:31Z","lastTransitionTime":"2026-01-20T11:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.035440 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d
10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:31Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.046796 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:31Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.061041 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:31Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.073931 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b
62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:31Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.087179 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:31Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:31 crc 
kubenswrapper[4725]: I0120 11:05:31.091227 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 08:33:43.923991293 +0000 UTC Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.104301 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":
\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and 
discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:31Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.125066 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.125127 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.125136 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.125151 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:31 crc kubenswrapper[4725]: 
I0120 11:05:31.125162 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:31Z","lastTransitionTime":"2026-01-20T11:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.126461 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:31Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.143154 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:31Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.163867 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:29Z\\\",\\\"message\\\":\\\"20 11:05:29.267425 6319 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 11:05:29.267458 6319 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 11:05:29.267476 6319 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 11:05:29.268196 6319 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0120 11:05:29.268265 6319 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:29.268284 6319 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:29.268333 6319 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 11:05:29.268348 6319 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 11:05:29.268385 6319 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 11:05:29.268403 6319 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:29.268419 6319 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:29.268431 6319 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:29.268442 6319 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 11:05:29.269138 6319 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:29.269231 6319 factory.go:656] Stopping watch factory\\\\nI0120 11:05:29.269255 6319 ovnkube.go:599] Stopped ovnkube\\\\nI0120 11:05:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf
09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:31Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.226752 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.226790 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.226800 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.226815 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.226824 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:31Z","lastTransitionTime":"2026-01-20T11:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.329300 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.329347 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.329362 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.329382 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.329399 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:31Z","lastTransitionTime":"2026-01-20T11:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.431837 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.431890 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.431905 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.431922 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.431944 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:31Z","lastTransitionTime":"2026-01-20T11:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.534499 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.534551 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.534564 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.534583 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.534597 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:31Z","lastTransitionTime":"2026-01-20T11:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.637653 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.637726 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.637750 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.637774 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.637792 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:31Z","lastTransitionTime":"2026-01-20T11:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.740901 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.740984 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.741001 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.741032 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.741051 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:31Z","lastTransitionTime":"2026-01-20T11:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.844286 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.844622 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.844708 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.844818 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.844906 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:31Z","lastTransitionTime":"2026-01-20T11:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.932356 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.932368 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:31 crc kubenswrapper[4725]: E0120 11:05:31.932558 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:31 crc kubenswrapper[4725]: E0120 11:05:31.932734 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.932383 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:31 crc kubenswrapper[4725]: E0120 11:05:31.933411 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.947724 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.947772 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.947796 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.947825 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.947846 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:31Z","lastTransitionTime":"2026-01-20T11:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.088802 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.088899 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.088945 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.088983 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.089000 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:32Z","lastTransitionTime":"2026-01-20T11:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.092022 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 03:41:23.139576104 +0000 UTC Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.191101 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.191154 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.191165 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.191185 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.191196 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:32Z","lastTransitionTime":"2026-01-20T11:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.293528 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.293566 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.293577 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.293593 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.293604 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:32Z","lastTransitionTime":"2026-01-20T11:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.396405 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.396441 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.396451 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.396465 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.396475 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:32Z","lastTransitionTime":"2026-01-20T11:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.499400 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.499458 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.499481 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.499505 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.499522 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:32Z","lastTransitionTime":"2026-01-20T11:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.602356 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.602448 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.602471 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.602539 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.602564 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:32Z","lastTransitionTime":"2026-01-20T11:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.705034 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.705094 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.705105 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.705122 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.705131 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:32Z","lastTransitionTime":"2026-01-20T11:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.807482 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.807526 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.807538 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.807555 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.807567 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:32Z","lastTransitionTime":"2026-01-20T11:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.909337 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.909373 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.909382 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.909395 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.909405 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:32Z","lastTransitionTime":"2026-01-20T11:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.931475 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:32 crc kubenswrapper[4725]: E0120 11:05:32.931633 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.946127 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:32Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:32 crc 
kubenswrapper[4725]: I0120 11:05:32.966678 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:32Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.984457 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b
62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:32Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.000115 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:32Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.011000 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.011029 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.011037 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.011050 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.011062 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:33Z","lastTransitionTime":"2026-01-20T11:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.023693 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:29Z\\\",\\\"message\\\":\\\"20 11:05:29.267425 6319 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 11:05:29.267458 6319 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 11:05:29.267476 6319 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 11:05:29.268196 6319 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0120 11:05:29.268265 6319 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:29.268284 6319 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:29.268333 6319 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 11:05:29.268348 6319 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 11:05:29.268385 6319 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 11:05:29.268403 6319 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:29.268419 6319 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:29.268431 6319 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:29.268442 6319 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 11:05:29.269138 6319 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:29.269231 6319 factory.go:656] Stopping watch factory\\\\nI0120 11:05:29.269255 6319 ovnkube.go:599] Stopped ovnkube\\\\nI0120 11:05:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf
09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:33Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.043307 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70
b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:33Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.068326 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:33Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.083038 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:33Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.092583 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 04:15:01.687985455 +0000 UTC Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.097280 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T11:05:33Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.112297 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:33Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.113533 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 
11:05:33.113647 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.113729 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.113827 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.113906 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:33Z","lastTransitionTime":"2026-01-20T11:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.128029 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdf
d287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:33Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.146140 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:33Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.191530 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:33Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.207419 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5489d4-628b-4079-b05c-06f8dcf74f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9068777e4f01e848a82bf288d7a3392cdc91101ee3a01bb4a81df93821dd63a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c38f828d227876595bd9deb8b2d85fb59815051b2e2bfa76a4708ffed580f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35e6d9a60b46ffa8c3718ab0311f05ffc829ac99c3a07206ee12402ebad872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:33Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.216504 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.216557 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.216575 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.216603 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.216616 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:33Z","lastTransitionTime":"2026-01-20T11:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.225364 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:33Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.238217 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:33Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.249637 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:33Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.260960 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:33Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.319957 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.320002 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure"
Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.320011 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.320030 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.320057 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:33Z","lastTransitionTime":"2026-01-20T11:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.423208 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.423633 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.423653 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.423680 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.423700 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:33Z","lastTransitionTime":"2026-01-20T11:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.526916 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.527316 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.527464 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.527618 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.527774 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:33Z","lastTransitionTime":"2026-01-20T11:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.631056 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.631693 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.631894 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.632125 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.632404 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:33Z","lastTransitionTime":"2026-01-20T11:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.735751 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.735795 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.735804 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.735823 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.735833 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:33Z","lastTransitionTime":"2026-01-20T11:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.839442 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.839539 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.839570 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.839622 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.839649 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:33Z","lastTransitionTime":"2026-01-20T11:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.931599 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.931706 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.931814 4725 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:33 crc kubenswrapper[4725]: E0120 11:05:33.931829 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:33 crc kubenswrapper[4725]: E0120 11:05:33.931946 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:33 crc kubenswrapper[4725]: E0120 11:05:33.932101 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.943035 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.943115 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.943128 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.943148 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.943161 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:33Z","lastTransitionTime":"2026-01-20T11:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.046189 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.046245 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.046258 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.046277 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.046289 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:34Z","lastTransitionTime":"2026-01-20T11:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.093095 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 11:44:57.069538618 +0000 UTC
Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.148910 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.149054 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.149118 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.149151 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.149169 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:34Z","lastTransitionTime":"2026-01-20T11:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.252349 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.252400 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.252410 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.252425 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.252437 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:34Z","lastTransitionTime":"2026-01-20T11:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.358228 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.358308 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.358334 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.358378 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.358408 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:34Z","lastTransitionTime":"2026-01-20T11:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.460801 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.460840 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.460851 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.460900 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.460913 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:34Z","lastTransitionTime":"2026-01-20T11:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.564190 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.564244 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.564255 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.564276 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.564292 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:34Z","lastTransitionTime":"2026-01-20T11:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.690957 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.691013 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.691028 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.691053 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.691069 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:34Z","lastTransitionTime":"2026-01-20T11:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.795438 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.795501 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.795519 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.795545 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.795563 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:34Z","lastTransitionTime":"2026-01-20T11:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.898433 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.898534 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.898553 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.898577 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.898624 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:34Z","lastTransitionTime":"2026-01-20T11:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.931824 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:34 crc kubenswrapper[4725]: E0120 11:05:34.932144 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.002638 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.002714 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.002730 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.002764 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.002782 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:35Z","lastTransitionTime":"2026-01-20T11:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.093997 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 13:17:05.167620867 +0000 UTC Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.106466 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.106497 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.106508 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.106521 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.106530 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:35Z","lastTransitionTime":"2026-01-20T11:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.209198 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.209234 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.209244 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.209258 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.209267 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:35Z","lastTransitionTime":"2026-01-20T11:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.312473 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.312505 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.312515 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.312533 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.312543 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:35Z","lastTransitionTime":"2026-01-20T11:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.414699 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.414731 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.414741 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.414753 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.414763 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:35Z","lastTransitionTime":"2026-01-20T11:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.517282 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.517328 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.517339 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.517355 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.517366 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:35Z","lastTransitionTime":"2026-01-20T11:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.620391 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.620435 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.620447 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.620477 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.620490 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:35Z","lastTransitionTime":"2026-01-20T11:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.722587 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.722646 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.722663 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.722707 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.722724 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:35Z","lastTransitionTime":"2026-01-20T11:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.825665 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.825718 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.825751 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.825775 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.825792 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:35Z","lastTransitionTime":"2026-01-20T11:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.928466 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.928500 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.928509 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.928522 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.928531 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:35Z","lastTransitionTime":"2026-01-20T11:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.931663 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.931747 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:35 crc kubenswrapper[4725]: E0120 11:05:35.931772 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.931663 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:35 crc kubenswrapper[4725]: E0120 11:05:35.931981 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:35 crc kubenswrapper[4725]: E0120 11:05:35.932113 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.031498 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.031579 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.031602 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.031668 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.031690 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:36Z","lastTransitionTime":"2026-01-20T11:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.095334 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 05:16:01.78606173 +0000 UTC Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.138309 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.138367 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.138397 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.138419 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.138436 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:36Z","lastTransitionTime":"2026-01-20T11:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.241518 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.241566 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.241581 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.241600 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.241615 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:36Z","lastTransitionTime":"2026-01-20T11:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.344880 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.344955 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.344973 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.344998 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.345015 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:36Z","lastTransitionTime":"2026-01-20T11:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.447785 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.447847 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.447860 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.447880 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.447892 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:36Z","lastTransitionTime":"2026-01-20T11:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.550165 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.550217 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.550229 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.550249 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.550260 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:36Z","lastTransitionTime":"2026-01-20T11:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.653273 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.653313 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.653323 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.653337 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.653353 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:36Z","lastTransitionTime":"2026-01-20T11:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.756517 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.756593 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.756605 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.756625 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.756637 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:36Z","lastTransitionTime":"2026-01-20T11:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.859132 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.859178 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.859190 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.859207 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.859220 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:36Z","lastTransitionTime":"2026-01-20T11:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.931964 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:36 crc kubenswrapper[4725]: E0120 11:05:36.932231 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.961326 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.961377 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.961398 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.961412 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.961428 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:36Z","lastTransitionTime":"2026-01-20T11:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.063960 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.064003 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.064015 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.064031 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.064043 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:37Z","lastTransitionTime":"2026-01-20T11:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.095610 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 03:48:21.173392198 +0000 UTC Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.166004 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.166063 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.166112 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.166160 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.166186 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:37Z","lastTransitionTime":"2026-01-20T11:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.268672 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.268736 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.268754 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.268778 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.268825 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:37Z","lastTransitionTime":"2026-01-20T11:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.371016 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.371138 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.371163 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.371194 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.371212 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:37Z","lastTransitionTime":"2026-01-20T11:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.473057 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.473158 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.473176 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.473200 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.473219 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:37Z","lastTransitionTime":"2026-01-20T11:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.496806 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.496866 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.496877 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.496894 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.496905 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:37Z","lastTransitionTime":"2026-01-20T11:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:37 crc kubenswrapper[4725]: E0120 11:05:37.519542 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:37Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.524791 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.524820 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.524835 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.524848 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.524856 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:37Z","lastTransitionTime":"2026-01-20T11:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:37 crc kubenswrapper[4725]: E0120 11:05:37.540359 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:37Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.545849 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.545877 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.545889 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.545903 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.545913 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:37Z","lastTransitionTime":"2026-01-20T11:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:37 crc kubenswrapper[4725]: E0120 11:05:37.562813 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:37Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.566769 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.566795 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.566803 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.566817 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.566826 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:37Z","lastTransitionTime":"2026-01-20T11:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:37 crc kubenswrapper[4725]: E0120 11:05:37.582788 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:37Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.586913 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.586957 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.586968 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.586986 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.586998 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:37Z","lastTransitionTime":"2026-01-20T11:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:37 crc kubenswrapper[4725]: E0120 11:05:37.601734 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:37Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:37 crc kubenswrapper[4725]: E0120 11:05:37.601908 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.603394 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.603420 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.603429 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.603445 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.603453 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:37Z","lastTransitionTime":"2026-01-20T11:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.706590 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.706634 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.706646 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.706674 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.706688 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:37Z","lastTransitionTime":"2026-01-20T11:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.809727 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.809786 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.809795 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.809809 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.809820 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:37Z","lastTransitionTime":"2026-01-20T11:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.913214 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.913287 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.913300 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.913318 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.913329 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:37Z","lastTransitionTime":"2026-01-20T11:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.931298 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.931314 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.931339 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:37 crc kubenswrapper[4725]: E0120 11:05:37.931428 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:37 crc kubenswrapper[4725]: E0120 11:05:37.931534 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:37 crc kubenswrapper[4725]: E0120 11:05:37.931707 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.015668 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.015711 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.015721 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.015738 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.015747 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:38Z","lastTransitionTime":"2026-01-20T11:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.096568 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 12:58:35.602264125 +0000 UTC Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.117942 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.117981 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.117993 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.118007 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.118016 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:38Z","lastTransitionTime":"2026-01-20T11:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.220756 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.220807 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.220822 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.220844 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.220859 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:38Z","lastTransitionTime":"2026-01-20T11:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.322895 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.323141 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.323213 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.323281 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.323347 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:38Z","lastTransitionTime":"2026-01-20T11:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.425007 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.425046 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.425055 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.425070 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.425109 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:38Z","lastTransitionTime":"2026-01-20T11:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.528018 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.528207 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.528225 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.528254 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.528280 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:38Z","lastTransitionTime":"2026-01-20T11:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.631018 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.631055 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.631065 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.631098 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.631108 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:38Z","lastTransitionTime":"2026-01-20T11:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.733234 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.733276 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.733286 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.733303 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.733312 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:38Z","lastTransitionTime":"2026-01-20T11:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.835555 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.835604 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.835615 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.835632 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.835646 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:38Z","lastTransitionTime":"2026-01-20T11:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.931363 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:38 crc kubenswrapper[4725]: E0120 11:05:38.931554 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.937517 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.937567 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.937583 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.937603 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.937615 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:38Z","lastTransitionTime":"2026-01-20T11:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.040541 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.040900 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.041043 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.041192 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.041277 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:39Z","lastTransitionTime":"2026-01-20T11:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.096994 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 04:18:22.92627893 +0000 UTC Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.144387 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.144458 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.144471 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.144488 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.144502 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:39Z","lastTransitionTime":"2026-01-20T11:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.246945 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.246981 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.246991 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.247004 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.247012 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:39Z","lastTransitionTime":"2026-01-20T11:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.349552 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.349604 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.349613 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.349628 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.349640 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:39Z","lastTransitionTime":"2026-01-20T11:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.453636 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.453716 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.453738 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.453764 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.453781 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:39Z","lastTransitionTime":"2026-01-20T11:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.557124 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.557183 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.557194 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.557208 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.557217 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:39Z","lastTransitionTime":"2026-01-20T11:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.659833 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.659881 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.659892 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.659911 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.659925 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:39Z","lastTransitionTime":"2026-01-20T11:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.762318 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.762364 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.762377 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.762397 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.762414 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:39Z","lastTransitionTime":"2026-01-20T11:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.865576 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.865624 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.865635 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.865667 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.865677 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:39Z","lastTransitionTime":"2026-01-20T11:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.932117 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.932231 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:39 crc kubenswrapper[4725]: E0120 11:05:39.932278 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.932307 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:39 crc kubenswrapper[4725]: E0120 11:05:39.932444 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:39 crc kubenswrapper[4725]: E0120 11:05:39.932551 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.968989 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.969038 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.969052 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.969071 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.969101 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:39Z","lastTransitionTime":"2026-01-20T11:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.070889 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.071181 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.071244 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.071309 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.071369 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:40Z","lastTransitionTime":"2026-01-20T11:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.097310 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 20:25:19.100880966 +0000 UTC Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.174267 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.174308 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.174317 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.174331 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.174342 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:40Z","lastTransitionTime":"2026-01-20T11:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.276770 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.277093 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.277220 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.277312 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.277403 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:40Z","lastTransitionTime":"2026-01-20T11:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.380244 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.380293 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.380306 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.380323 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.380336 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:40Z","lastTransitionTime":"2026-01-20T11:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.482373 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.482418 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.482428 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.482443 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.482455 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:40Z","lastTransitionTime":"2026-01-20T11:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.585163 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.585211 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.585222 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.585239 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.585250 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:40Z","lastTransitionTime":"2026-01-20T11:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.688070 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.688134 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.688144 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.688160 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.688169 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:40Z","lastTransitionTime":"2026-01-20T11:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.791161 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.791227 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.791238 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.791257 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.791269 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:40Z","lastTransitionTime":"2026-01-20T11:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.893669 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.893730 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.893748 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.893774 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.893792 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:40Z","lastTransitionTime":"2026-01-20T11:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.931573 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:40 crc kubenswrapper[4725]: E0120 11:05:40.931825 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.996936 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.997003 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.997014 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.997032 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.997044 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:40Z","lastTransitionTime":"2026-01-20T11:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.097655 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 13:07:08.980036686 +0000 UTC Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.099490 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.099539 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.099575 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.099599 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.099616 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:41Z","lastTransitionTime":"2026-01-20T11:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.202685 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.202728 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.202740 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.202758 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.202769 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:41Z","lastTransitionTime":"2026-01-20T11:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.305353 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.305416 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.305433 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.305459 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.305478 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:41Z","lastTransitionTime":"2026-01-20T11:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.408818 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.408877 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.408889 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.408909 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.408929 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:41Z","lastTransitionTime":"2026-01-20T11:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.530851 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.530906 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.530914 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.530927 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.530937 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:41Z","lastTransitionTime":"2026-01-20T11:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.633277 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.633334 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.633346 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.633366 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.633380 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:41Z","lastTransitionTime":"2026-01-20T11:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.736448 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.736490 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.736499 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.736515 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.736525 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:41Z","lastTransitionTime":"2026-01-20T11:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.838986 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.839032 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.839043 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.839099 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.839112 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:41Z","lastTransitionTime":"2026-01-20T11:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.931584 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.931668 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.931598 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:41 crc kubenswrapper[4725]: E0120 11:05:41.931728 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:41 crc kubenswrapper[4725]: E0120 11:05:41.931807 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:41 crc kubenswrapper[4725]: E0120 11:05:41.932294 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.941893 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.941936 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.941947 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.941959 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.941968 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:41Z","lastTransitionTime":"2026-01-20T11:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.044568 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.044611 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.044634 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.044650 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.044661 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:42Z","lastTransitionTime":"2026-01-20T11:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.098776 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 02:43:45.175063546 +0000 UTC Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.148301 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.148372 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.148418 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.148469 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.148495 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:42Z","lastTransitionTime":"2026-01-20T11:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.253260 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.253379 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.253400 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.253433 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.253451 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:42Z","lastTransitionTime":"2026-01-20T11:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.356143 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.356207 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.356218 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.356232 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.356241 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:42Z","lastTransitionTime":"2026-01-20T11:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.459003 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.459048 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.459057 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.459072 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.459101 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:42Z","lastTransitionTime":"2026-01-20T11:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.561302 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.561338 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.561350 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.561366 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.561378 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:42Z","lastTransitionTime":"2026-01-20T11:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.573821 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs\") pod \"network-metrics-daemon-5lfc4\" (UID: \"a5d55efc-e85a-4a02-a4ce-7355df9fea66\") " pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:42 crc kubenswrapper[4725]: E0120 11:05:42.574115 4725 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 11:05:42 crc kubenswrapper[4725]: E0120 11:05:42.574218 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs podName:a5d55efc-e85a-4a02-a4ce-7355df9fea66 nodeName:}" failed. No retries permitted until 2026-01-20 11:06:14.574193616 +0000 UTC m=+102.782515589 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs") pod "network-metrics-daemon-5lfc4" (UID: "a5d55efc-e85a-4a02-a4ce-7355df9fea66") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.664099 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.664148 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.664159 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.664174 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.664184 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:42Z","lastTransitionTime":"2026-01-20T11:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.766742 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.766789 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.766803 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.766821 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.766844 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:42Z","lastTransitionTime":"2026-01-20T11:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.869031 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.869098 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.869112 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.869129 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.869141 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:42Z","lastTransitionTime":"2026-01-20T11:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.932200 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:42 crc kubenswrapper[4725]: E0120 11:05:42.932519 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.967443 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:29Z\\\",\\\"message\\\":\\\"20 11:05:29.267425 6319 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 11:05:29.267458 6319 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 11:05:29.267476 6319 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 11:05:29.268196 6319 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0120 11:05:29.268265 6319 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:29.268284 6319 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:29.268333 6319 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 11:05:29.268348 6319 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 11:05:29.268385 6319 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 11:05:29.268403 6319 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:29.268419 6319 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:29.268431 6319 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:29.268442 6319 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 11:05:29.269138 6319 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:29.269231 6319 factory.go:656] Stopping watch factory\\\\nI0120 11:05:29.269255 6319 ovnkube.go:599] Stopped ovnkube\\\\nI0120 11:05:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf
09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:42Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.973683 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.973738 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.973754 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.973778 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.973797 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:42Z","lastTransitionTime":"2026-01-20T11:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.989285 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3
e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:42Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.019020 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:43Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.033547 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:43Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.048958 4725 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:43Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.064097 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:43Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 
11:05:43.076909 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.076963 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.076975 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.076995 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.077006 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:43Z","lastTransitionTime":"2026-01-20T11:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.084129 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:43Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.099971 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 22:42:21.123324199 +0000 UTC Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.100840 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f
25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:43Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.119683 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:43Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.137272 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5489d4-628b-4079-b05c-06f8dcf74f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9068777e4f01e848a82bf288d7a3392cdc91101ee3a01bb4a81df93821dd63a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c38f828d227876595bd9deb8b2d85fb59815051b2e2bfa76a4708ffed580f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd
789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35e6d9a60b46ffa8c3718ab0311f05ffc829ac99c3a07206ee12402ebad872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:43Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.157604 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:43Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.180886 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.180928 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.180941 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.180966 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.180979 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:43Z","lastTransitionTime":"2026-01-20T11:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.183583 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:43Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.201424 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:43Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.217048 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:43Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.232675 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:43Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.250351 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:43Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.266758 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b
62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:43Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.280960 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:43Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc 
kubenswrapper[4725]: I0120 11:05:43.283943 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.283989 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.284007 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.284031 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.284048 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:43Z","lastTransitionTime":"2026-01-20T11:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.386825 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.387163 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.387258 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.387375 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.387496 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:43Z","lastTransitionTime":"2026-01-20T11:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.489872 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.489933 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.489953 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.489977 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.490033 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:43Z","lastTransitionTime":"2026-01-20T11:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.596283 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.596356 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.596367 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.596384 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.596397 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:43Z","lastTransitionTime":"2026-01-20T11:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.699432 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.699476 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.699484 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.699500 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.699511 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:43Z","lastTransitionTime":"2026-01-20T11:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.802392 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.802440 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.802452 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.802471 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.802482 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:43Z","lastTransitionTime":"2026-01-20T11:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.908326 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.908390 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.908405 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.908427 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.908442 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:43Z","lastTransitionTime":"2026-01-20T11:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.931563 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.931568 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.931614 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 11:05:43 crc kubenswrapper[4725]: E0120 11:05:43.931703 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 20 11:05:43 crc kubenswrapper[4725]: E0120 11:05:43.931860 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 11:05:43 crc kubenswrapper[4725]: E0120 11:05:43.932009 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.011117 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.011152 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.011163 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.011177 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.011186 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:44Z","lastTransitionTime":"2026-01-20T11:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.100962 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 02:22:34.455650511 +0000 UTC
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.114113 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.114197 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.114215 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.114236 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.114249 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:44Z","lastTransitionTime":"2026-01-20T11:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.216249 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.216282 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.216290 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.216303 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.216313 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:44Z","lastTransitionTime":"2026-01-20T11:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.319292 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.319342 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.319354 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.319372 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.319383 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:44Z","lastTransitionTime":"2026-01-20T11:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.421780 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.421821 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.421833 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.421852 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.421866 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:44Z","lastTransitionTime":"2026-01-20T11:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.524654 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.524695 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.524715 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.524731 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.524742 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:44Z","lastTransitionTime":"2026-01-20T11:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.628021 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.628091 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.628155 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.628180 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.628232 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:44Z","lastTransitionTime":"2026-01-20T11:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.730694 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.730770 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.730786 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.730802 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.730813 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:44Z","lastTransitionTime":"2026-01-20T11:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.833579 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.833619 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.833654 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.833669 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.833680 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:44Z","lastTransitionTime":"2026-01-20T11:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.932071 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4"
Jan 20 11:05:44 crc kubenswrapper[4725]: E0120 11:05:44.932218 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.936151 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.936201 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.936220 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.936243 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.936260 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:44Z","lastTransitionTime":"2026-01-20T11:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.038266 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.038299 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.038310 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.038325 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.038336 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:45Z","lastTransitionTime":"2026-01-20T11:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.102095 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 15:20:26.49045766 +0000 UTC
Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.141363 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.141422 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.141443 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.141502 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.141522 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:45Z","lastTransitionTime":"2026-01-20T11:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.244777 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.244810 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.244819 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.244832 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.244842 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:45Z","lastTransitionTime":"2026-01-20T11:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.347232 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.347276 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.347285 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.347300 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.347310 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:45Z","lastTransitionTime":"2026-01-20T11:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.449639 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.449681 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.449692 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.449707 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.449720 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:45Z","lastTransitionTime":"2026-01-20T11:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.552681 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.553310 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.553337 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.553358 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.553388 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:45Z","lastTransitionTime":"2026-01-20T11:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.656265 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.656317 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.656327 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.656348 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.656359 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:45Z","lastTransitionTime":"2026-01-20T11:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.758740 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.758805 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.758815 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.758832 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.758842 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:45Z","lastTransitionTime":"2026-01-20T11:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.861783 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.861843 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.861853 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.861867 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.861876 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:45Z","lastTransitionTime":"2026-01-20T11:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.931851 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:45 crc kubenswrapper[4725]: E0120 11:05:45.932430 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.931875 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:45 crc kubenswrapper[4725]: E0120 11:05:45.932509 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.932638 4725 scope.go:117] "RemoveContainer" containerID="0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a" Jan 20 11:05:45 crc kubenswrapper[4725]: E0120 11:05:45.932836 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.931851 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:45 crc kubenswrapper[4725]: E0120 11:05:45.933343 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.965034 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.965111 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.965130 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.965153 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.965169 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:45Z","lastTransitionTime":"2026-01-20T11:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.066722 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.066764 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.066773 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.066786 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.066796 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:46Z","lastTransitionTime":"2026-01-20T11:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.102216 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 02:44:16.345655341 +0000 UTC Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.169784 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.169838 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.169847 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.169862 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.169875 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:46Z","lastTransitionTime":"2026-01-20T11:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.272996 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.273162 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.273192 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.273224 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.273249 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:46Z","lastTransitionTime":"2026-01-20T11:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.375497 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.375569 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.375590 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.375615 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.375638 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:46Z","lastTransitionTime":"2026-01-20T11:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.478327 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.478373 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.478397 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.478413 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.478424 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:46Z","lastTransitionTime":"2026-01-20T11:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.580920 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.580986 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.581007 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.581033 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.581053 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:46Z","lastTransitionTime":"2026-01-20T11:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.683691 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.683736 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.683747 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.683761 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.683771 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:46Z","lastTransitionTime":"2026-01-20T11:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.786635 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.786675 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.786684 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.786700 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.786738 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:46Z","lastTransitionTime":"2026-01-20T11:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.889490 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.889532 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.889542 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.889557 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.889566 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:46Z","lastTransitionTime":"2026-01-20T11:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.931434 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:46 crc kubenswrapper[4725]: E0120 11:05:46.931681 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.960672 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vchwb_627f7c97-4173-413f-a90e-e2c5e058c53b/kube-multus/0.log" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.960735 4725 generic.go:334] "Generic (PLEG): container finished" podID="627f7c97-4173-413f-a90e-e2c5e058c53b" containerID="60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad" exitCode=1 Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.960770 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vchwb" event={"ID":"627f7c97-4173-413f-a90e-e2c5e058c53b","Type":"ContainerDied","Data":"60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad"} Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.961231 4725 scope.go:117] "RemoveContainer" containerID="60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.981000 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:46Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.992006 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.992054 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.992071 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:46 crc 
kubenswrapper[4725]: I0120 11:05:46.992121 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.992138 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:46Z","lastTransitionTime":"2026-01-20T11:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.010494 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.027936 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.045600 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.067421 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.083144 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b
62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.094994 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.095040 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.095051 4725 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.095078 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.095101 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:47Z","lastTransitionTime":"2026-01-20T11:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.095961 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 
20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.102859 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 21:32:41.36578562 +0000 UTC Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.115300 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\
",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints 
registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.140875 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.158744 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.184311 4725 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:29Z\\\",\\\"message\\\":\\\"20 11:05:29.267425 6319 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 11:05:29.267458 6319 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 11:05:29.267476 6319 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 11:05:29.268196 6319 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0120 11:05:29.268265 6319 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:29.268284 6319 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:29.268333 6319 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 11:05:29.268348 6319 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 11:05:29.268385 6319 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 11:05:29.268403 6319 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:29.268419 6319 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:29.268431 6319 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:29.268442 6319 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 11:05:29.269138 6319 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:29.269231 6319 factory.go:656] Stopping watch factory\\\\nI0120 11:05:29.269255 6319 ovnkube.go:599] Stopped ovnkube\\\\nI0120 11:05:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf
09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.197896 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.197939 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.197948 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.197963 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.197973 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:47Z","lastTransitionTime":"2026-01-20T11:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.200666 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.214385 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5489d4-628b-4079-b05c-06f8dcf74f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9068777e4f01e848a82bf288d7a3392cdc91101ee3a01bb4a81df93821dd63a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c38f828d227876595bd9deb8b2d85fb59815051b2e2bfa76a4708ffed580f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35e6d9a60b46ffa8c3718ab0311f05ffc829ac99c3a07206ee12402ebad872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.229356 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.242616 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.255981 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.273263 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-relea
se\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.286891 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:45Z\\\",\\\"message\\\":\\\"2026-01-20T11:05:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f\\\\n2026-01-20T11:05:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f to /host/opt/cni/bin/\\\\n2026-01-20T11:05:00Z [verbose] multus-daemon started\\\\n2026-01-20T11:05:00Z [verbose] Readiness Indicator file check\\\\n2026-01-20T11:05:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.303696 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.303740 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.303749 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.303763 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.303774 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:47Z","lastTransitionTime":"2026-01-20T11:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.407486 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.407544 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.407556 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.407573 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.407584 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:47Z","lastTransitionTime":"2026-01-20T11:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.509808 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.509839 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.509847 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.509862 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.509873 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:47Z","lastTransitionTime":"2026-01-20T11:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.612002 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.612041 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.612049 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.612063 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.612071 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:47Z","lastTransitionTime":"2026-01-20T11:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.714569 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.714619 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.714630 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.714646 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.714658 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:47Z","lastTransitionTime":"2026-01-20T11:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.816801 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.816830 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.816838 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.816851 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.816877 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:47Z","lastTransitionTime":"2026-01-20T11:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.883250 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.883295 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.883307 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.883325 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.883336 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:47Z","lastTransitionTime":"2026-01-20T11:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:47 crc kubenswrapper[4725]: E0120 11:05:47.905350 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.909488 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.909509 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.909517 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.909532 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.909541 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:47Z","lastTransitionTime":"2026-01-20T11:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:47 crc kubenswrapper[4725]: E0120 11:05:47.925859 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.930179 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.930198 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.930206 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.930219 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.930230 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:47Z","lastTransitionTime":"2026-01-20T11:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.941697 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.941751 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:47 crc kubenswrapper[4725]: E0120 11:05:47.941860 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:47 crc kubenswrapper[4725]: E0120 11:05:47.942091 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.941805 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:47 crc kubenswrapper[4725]: E0120 11:05:47.942415 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:47 crc kubenswrapper[4725]: E0120 11:05:47.946069 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.950236 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.950267 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.950278 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.950294 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.950306 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:47Z","lastTransitionTime":"2026-01-20T11:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.966508 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vchwb_627f7c97-4173-413f-a90e-e2c5e058c53b/kube-multus/0.log" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.966567 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vchwb" event={"ID":"627f7c97-4173-413f-a90e-e2c5e058c53b","Type":"ContainerStarted","Data":"31582cdcadbdaf1ab01a8b97fcd1abb4352c22086d2562704ab13dd4f470cea6"} Jan 20 11:05:47 crc kubenswrapper[4725]: E0120 11:05:47.966407 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.971255 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.971290 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.971304 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.971324 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.971340 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:47Z","lastTransitionTime":"2026-01-20T11:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.982442 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: E0120 11:05:47.986561 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: E0120 11:05:47.986679 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.988490 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.988515 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.988527 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.988544 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.988557 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:47Z","lastTransitionTime":"2026-01-20T11:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.004749 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:29Z\\\",\\\"message\\\":\\\"20 11:05:29.267425 6319 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 11:05:29.267458 6319 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 11:05:29.267476 6319 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 11:05:29.268196 6319 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0120 11:05:29.268265 6319 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:29.268284 6319 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:29.268333 6319 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 11:05:29.268348 6319 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 11:05:29.268385 6319 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 11:05:29.268403 6319 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:29.268419 6319 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:29.268431 6319 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:29.268442 6319 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 11:05:29.269138 6319 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:29.269231 6319 factory.go:656] Stopping watch factory\\\\nI0120 11:05:29.269255 6319 ovnkube.go:599] Stopped ovnkube\\\\nI0120 11:05:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf
09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.022986 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70
b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.044158 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.058203 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.069629 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.079809 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.090707 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 
11:05:48.090765 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.090780 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.090796 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.091219 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:48Z","lastTransitionTime":"2026-01-20T11:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.096484 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdf
d287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.104032 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 16:45:58.347544682 +0000 UTC Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.109227 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31582cdcadbdaf1ab01a8b97fcd1abb4352c22086d2562704ab13dd4f470cea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:45Z\\\",\\\"message\\\":\\\"2026-01-20T11:05:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f\\\\n2026-01-20T11:05:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f to /host/opt/cni/bin/\\\\n2026-01-20T11:05:00Z [verbose] multus-daemon started\\\\n2026-01-20T11:05:00Z [verbose] Readiness Indicator file check\\\\n2026-01-20T11:05:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var
-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.122387 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.133918 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5489d4-628b-4079-b05c-06f8dcf74f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9068777e4f01e848a82bf288d7a3392cdc91101ee3a01bb4a81df93821dd63a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c38f828d227876595bd9deb8b2d85fb59815051b2e2bfa76a4708ffed580f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35e6d9a60b46ffa8c3718ab0311f05ffc829ac99c3a07206ee12402ebad872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.150382 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.161888 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.174289 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.187664 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.199970 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc 
kubenswrapper[4725]: I0120 11:05:48.215299 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.228401 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b
62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.406332 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.406389 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.406403 4725 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.406423 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.406443 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:48Z","lastTransitionTime":"2026-01-20T11:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.509724 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.509758 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.509769 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.509788 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.509799 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:48Z","lastTransitionTime":"2026-01-20T11:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.612363 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.612418 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.612437 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.612463 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.612484 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:48Z","lastTransitionTime":"2026-01-20T11:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.719211 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.719304 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.719320 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.719349 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.719361 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:48Z","lastTransitionTime":"2026-01-20T11:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.823518 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.823583 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.823596 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.823616 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.823629 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:48Z","lastTransitionTime":"2026-01-20T11:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.926550 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.926615 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.926635 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.926660 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.926677 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:48Z","lastTransitionTime":"2026-01-20T11:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.931903 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:48 crc kubenswrapper[4725]: E0120 11:05:48.932070 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.028781 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.028829 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.028839 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.028858 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.028872 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:49Z","lastTransitionTime":"2026-01-20T11:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.105136 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 04:06:11.693582473 +0000 UTC Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.132296 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.132369 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.132392 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.132422 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.132445 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:49Z","lastTransitionTime":"2026-01-20T11:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.235304 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.235394 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.235416 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.235444 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.235465 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:49Z","lastTransitionTime":"2026-01-20T11:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.338752 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.339111 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.339216 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.339353 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.339457 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:49Z","lastTransitionTime":"2026-01-20T11:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.442781 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.442852 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.442873 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.442898 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.442917 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:49Z","lastTransitionTime":"2026-01-20T11:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.546741 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.546799 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.546814 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.546834 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.546849 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:49Z","lastTransitionTime":"2026-01-20T11:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.650281 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.650367 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.650388 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.650413 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.650431 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:49Z","lastTransitionTime":"2026-01-20T11:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.753658 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.754350 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.754406 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.754438 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.754458 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:49Z","lastTransitionTime":"2026-01-20T11:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.856428 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.856459 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.856468 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.856481 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.856491 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:49Z","lastTransitionTime":"2026-01-20T11:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.931765 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.931825 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:49 crc kubenswrapper[4725]: E0120 11:05:49.931896 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.931765 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:49 crc kubenswrapper[4725]: E0120 11:05:49.932172 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:49 crc kubenswrapper[4725]: E0120 11:05:49.932253 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.980970 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.981014 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.981023 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.981035 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.981045 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:49Z","lastTransitionTime":"2026-01-20T11:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.084699 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.084756 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.084775 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.084802 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.084842 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:50Z","lastTransitionTime":"2026-01-20T11:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.106191 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 12:41:41.894623393 +0000 UTC Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.188209 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.188280 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.188299 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.188324 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.188347 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:50Z","lastTransitionTime":"2026-01-20T11:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.292059 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.292163 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.292186 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.292214 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.292232 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:50Z","lastTransitionTime":"2026-01-20T11:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.395136 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.395195 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.395219 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.395247 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.395268 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:50Z","lastTransitionTime":"2026-01-20T11:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.497547 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.497587 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.497599 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.497615 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.497627 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:50Z","lastTransitionTime":"2026-01-20T11:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.600527 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.600608 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.600631 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.600663 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.600686 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:50Z","lastTransitionTime":"2026-01-20T11:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.703198 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.703258 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.703281 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.703300 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.703313 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:50Z","lastTransitionTime":"2026-01-20T11:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.806496 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.806538 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.806550 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.806581 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.806615 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:50Z","lastTransitionTime":"2026-01-20T11:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.909971 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.910010 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.910022 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.910039 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.910050 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:50Z","lastTransitionTime":"2026-01-20T11:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.931893 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:50 crc kubenswrapper[4725]: E0120 11:05:50.932119 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.012235 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.012275 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.012289 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.012306 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.012322 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:51Z","lastTransitionTime":"2026-01-20T11:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.107229 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 05:33:46.671066248 +0000 UTC Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.114803 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.114835 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.114849 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.114866 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.114878 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:51Z","lastTransitionTime":"2026-01-20T11:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.217689 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.217732 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.217743 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.217764 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.217778 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:51Z","lastTransitionTime":"2026-01-20T11:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.320827 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.320868 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.320879 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.320924 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.320936 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:51Z","lastTransitionTime":"2026-01-20T11:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.424138 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.424185 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.424197 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.424214 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.424226 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:51Z","lastTransitionTime":"2026-01-20T11:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.528938 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.528978 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.528989 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.529011 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.529021 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:51Z","lastTransitionTime":"2026-01-20T11:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.632731 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.633062 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.633234 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.633368 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.633457 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:51Z","lastTransitionTime":"2026-01-20T11:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.736032 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.736139 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.736152 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.736169 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.736184 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:51Z","lastTransitionTime":"2026-01-20T11:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.839862 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.839930 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.839946 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.839969 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.839989 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:51Z","lastTransitionTime":"2026-01-20T11:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.931846 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.931893 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.931901 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:51 crc kubenswrapper[4725]: E0120 11:05:51.932028 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:51 crc kubenswrapper[4725]: E0120 11:05:51.932228 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:51 crc kubenswrapper[4725]: E0120 11:05:51.932383 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.943713 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.943757 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.943779 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.943798 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.943813 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:51Z","lastTransitionTime":"2026-01-20T11:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.047675 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.047743 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.047760 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.047784 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.047801 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:52Z","lastTransitionTime":"2026-01-20T11:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.107667 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 15:40:14.314038437 +0000 UTC Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.150702 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.150797 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.150822 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.150852 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.150875 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:52Z","lastTransitionTime":"2026-01-20T11:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.254101 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.254149 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.254160 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.254178 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.254192 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:52Z","lastTransitionTime":"2026-01-20T11:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.356547 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.356598 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.356609 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.356624 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.356635 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:52Z","lastTransitionTime":"2026-01-20T11:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.459328 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.459383 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.459392 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.459408 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.459417 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:52Z","lastTransitionTime":"2026-01-20T11:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.594098 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.594174 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.594186 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.594205 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.594220 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:52Z","lastTransitionTime":"2026-01-20T11:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.696932 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.696997 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.697008 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.697024 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.697034 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:52Z","lastTransitionTime":"2026-01-20T11:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.800458 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.800531 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.800547 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.800568 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.800586 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:52Z","lastTransitionTime":"2026-01-20T11:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.904202 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.904240 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.904253 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.904268 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.904280 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:52Z","lastTransitionTime":"2026-01-20T11:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.932268 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:52 crc kubenswrapper[4725]: E0120 11:05:52.932411 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.947061 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:52Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.961681 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:52Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.972206 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:52Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.984503 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:52Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.002400 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:53Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.006300 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.006384 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:53 crc 
kubenswrapper[4725]: I0120 11:05:53.006396 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.006422 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.006434 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:53Z","lastTransitionTime":"2026-01-20T11:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.015776 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b
62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:53Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.030933 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:53Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:53 crc 
kubenswrapper[4725]: I0120 11:05:53.049013 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2
d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:53Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.075992 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:53Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.091813 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:53Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.107891 4725 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 06:22:33.647465803 +0000 UTC Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.109008 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.109043 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.109056 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.109073 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.109114 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:53Z","lastTransitionTime":"2026-01-20T11:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.113878 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:29Z\\\",\\\"message\\\":\\\"20 11:05:29.267425 6319 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 11:05:29.267458 6319 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 11:05:29.267476 6319 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 11:05:29.268196 6319 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0120 11:05:29.268265 6319 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:29.268284 6319 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:29.268333 6319 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 11:05:29.268348 6319 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 11:05:29.268385 6319 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 11:05:29.268403 6319 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:29.268419 6319 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:29.268431 6319 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:29.268442 6319 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 11:05:29.269138 6319 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:29.269231 6319 factory.go:656] Stopping watch factory\\\\nI0120 11:05:29.269255 6319 ovnkube.go:599] Stopped ovnkube\\\\nI0120 11:05:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf
09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:53Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.130400 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:53Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.148186 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5489d4-628b-4079-b05c-06f8dcf74f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9068777e4f01e848a82bf288d7a3392cdc91101ee3a01bb4a81df93821dd63a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c38f828d227876595bd9deb8b2d85fb59815051b2e2bfa76a4708ffed580f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35e6d9a60b46ffa8c3718ab0311f05ffc829ac99c3a07206ee12402ebad872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:53Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.164980 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:53Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.180463 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:53Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.195993 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-20T11:05:53Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.212289 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.212335 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.212378 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.212403 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.212416 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:53Z","lastTransitionTime":"2026-01-20T11:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.212392 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:53Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.229825 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31582cdcadbdaf1ab01a8b97fcd1abb4352c22086d2562704ab13dd4f470cea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7
eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:45Z\\\",\\\"message\\\":\\\"2026-01-20T11:05:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f\\\\n2026-01-20T11:05:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f to /host/opt/cni/bin/\\\\n2026-01-20T11:05:00Z [verbose] multus-daemon started\\\\n2026-01-20T11:05:00Z [verbose] Readiness Indicator file check\\\\n2026-01-20T11:05:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-
lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:53Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.314637 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.314702 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.314718 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.314735 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.314747 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:53Z","lastTransitionTime":"2026-01-20T11:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.418537 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.418598 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.418612 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.418634 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.418648 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:53Z","lastTransitionTime":"2026-01-20T11:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.521769 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.521812 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.521824 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.521839 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.521849 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:53Z","lastTransitionTime":"2026-01-20T11:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.624709 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.624774 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.624802 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.624836 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.624859 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:53Z","lastTransitionTime":"2026-01-20T11:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.726681 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.726717 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.726728 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.726743 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.726753 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:53Z","lastTransitionTime":"2026-01-20T11:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.829071 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.829117 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.829125 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.829145 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.829154 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:53Z","lastTransitionTime":"2026-01-20T11:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.931219 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:53 crc kubenswrapper[4725]: E0120 11:05:53.931366 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.931504 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.931504 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:53 crc kubenswrapper[4725]: E0120 11:05:53.931671 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:53 crc kubenswrapper[4725]: E0120 11:05:53.931937 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.932668 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.932790 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.932883 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.932963 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.933030 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:53Z","lastTransitionTime":"2026-01-20T11:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.035955 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.036003 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.036014 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.036033 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.036046 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:54Z","lastTransitionTime":"2026-01-20T11:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.108727 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 19:18:09.82347916 +0000 UTC Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.140840 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.140885 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.140899 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.140925 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.140937 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:54Z","lastTransitionTime":"2026-01-20T11:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.244112 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.244219 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.244260 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.244342 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.244383 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:54Z","lastTransitionTime":"2026-01-20T11:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.347298 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.347348 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.347360 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.347379 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.347392 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:54Z","lastTransitionTime":"2026-01-20T11:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.450401 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.450434 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.450441 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.450453 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.450461 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:54Z","lastTransitionTime":"2026-01-20T11:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.554513 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.554603 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.554620 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.554651 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.554669 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:54Z","lastTransitionTime":"2026-01-20T11:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.657511 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.657568 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.657587 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.657616 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.657633 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:54Z","lastTransitionTime":"2026-01-20T11:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.760468 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.760515 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.760524 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.760539 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.760548 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:54Z","lastTransitionTime":"2026-01-20T11:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.863372 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.863439 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.863460 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.863488 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.863506 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:54Z","lastTransitionTime":"2026-01-20T11:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.932161 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:54 crc kubenswrapper[4725]: E0120 11:05:54.932427 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.966623 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.966664 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.966676 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.966690 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.966700 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:54Z","lastTransitionTime":"2026-01-20T11:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.075407 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.075446 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.075458 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.075474 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.075490 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:55Z","lastTransitionTime":"2026-01-20T11:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.109169 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 15:22:07.189068616 +0000 UTC Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.177931 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.178209 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.178340 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.178429 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.178516 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:55Z","lastTransitionTime":"2026-01-20T11:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.280930 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.280976 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.280993 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.281010 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.281023 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:55Z","lastTransitionTime":"2026-01-20T11:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.383103 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.383435 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.383580 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.383699 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.383780 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:55Z","lastTransitionTime":"2026-01-20T11:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.487371 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.487426 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.487439 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.487455 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.487467 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:55Z","lastTransitionTime":"2026-01-20T11:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.589604 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.589637 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.589646 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.589658 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.589666 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:55Z","lastTransitionTime":"2026-01-20T11:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.692238 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.692277 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.692289 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.692305 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.692318 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:55Z","lastTransitionTime":"2026-01-20T11:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.795221 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.795273 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.795290 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.795312 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.795328 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:55Z","lastTransitionTime":"2026-01-20T11:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.898617 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.898683 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.898709 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.898743 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.898767 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:55Z","lastTransitionTime":"2026-01-20T11:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.931276 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.931339 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.931276 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:55 crc kubenswrapper[4725]: E0120 11:05:55.931434 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:55 crc kubenswrapper[4725]: E0120 11:05:55.931522 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:55 crc kubenswrapper[4725]: E0120 11:05:55.931810 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.035462 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.035528 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.035740 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.035770 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.035795 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:56Z","lastTransitionTime":"2026-01-20T11:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.109468 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 20:49:52.033780129 +0000 UTC Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.137686 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.137778 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.137801 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.137828 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.137847 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:56Z","lastTransitionTime":"2026-01-20T11:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.241367 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.241434 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.241459 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.241491 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.241516 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:56Z","lastTransitionTime":"2026-01-20T11:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.345236 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.345303 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.345321 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.345352 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.345373 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:56Z","lastTransitionTime":"2026-01-20T11:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.449276 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.449337 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.449356 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.449380 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.449398 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:56Z","lastTransitionTime":"2026-01-20T11:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.552803 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.552843 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.552852 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.552870 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.552880 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:56Z","lastTransitionTime":"2026-01-20T11:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.655794 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.655842 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.655852 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.655872 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.655890 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:56Z","lastTransitionTime":"2026-01-20T11:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.758339 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.758394 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.758430 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.758454 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.758470 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:56Z","lastTransitionTime":"2026-01-20T11:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.860665 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.860699 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.860709 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.860723 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.860734 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:56Z","lastTransitionTime":"2026-01-20T11:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.932368 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:56 crc kubenswrapper[4725]: E0120 11:05:56.932607 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.963701 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.963763 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.963780 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.963800 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.963811 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:56Z","lastTransitionTime":"2026-01-20T11:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.066066 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.066170 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.066189 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.066215 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.066235 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:57Z","lastTransitionTime":"2026-01-20T11:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.110478 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 05:34:39.196391356 +0000 UTC Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.169233 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.169314 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.169324 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.169340 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.169351 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:57Z","lastTransitionTime":"2026-01-20T11:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.271584 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.271628 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.271638 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.271650 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.271660 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:57Z","lastTransitionTime":"2026-01-20T11:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.374818 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.374866 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.374879 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.374898 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.374917 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:57Z","lastTransitionTime":"2026-01-20T11:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.477885 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.477938 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.477958 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.477982 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.478000 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:57Z","lastTransitionTime":"2026-01-20T11:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.580364 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.580398 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.580411 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.580428 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.580441 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:57Z","lastTransitionTime":"2026-01-20T11:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.682817 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.682867 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.682878 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.682897 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.682912 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:57Z","lastTransitionTime":"2026-01-20T11:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.784906 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.784958 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.784969 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.784987 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.784999 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:57Z","lastTransitionTime":"2026-01-20T11:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.868763 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:05:57 crc kubenswrapper[4725]: E0120 11:05:57.869037 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-20 11:07:01.86900019 +0000 UTC m=+150.077322173 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.887921 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.887990 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.888006 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.888029 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.888043 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:57Z","lastTransitionTime":"2026-01-20T11:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.932129 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.932144 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.932434 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:57 crc kubenswrapper[4725]: E0120 11:05:57.932603 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:57 crc kubenswrapper[4725]: E0120 11:05:57.932803 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:57 crc kubenswrapper[4725]: E0120 11:05:57.932871 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.932901 4725 scope.go:117] "RemoveContainer" containerID="0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.992563 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.992612 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.992632 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.992660 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.992682 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:57Z","lastTransitionTime":"2026-01-20T11:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.095805 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.096388 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.096412 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.096428 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.096440 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:58Z","lastTransitionTime":"2026-01-20T11:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.111008 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 19:57:21.475067378 +0000 UTC Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.148253 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.148332 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.148356 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.148389 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.148411 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:58Z","lastTransitionTime":"2026-01-20T11:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.170215 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.172364 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.172433 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.172471 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.172568 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.172599 4725 configmap.go:193] Couldn't get configMap 
openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.172615 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.172636 4725 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.172687 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 11:07:02.172665972 +0000 UTC m=+150.380987965 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.172713 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 11:07:02.172702283 +0000 UTC m=+150.381024266 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.172715 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.172760 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.172777 4725 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.172855 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 11:07:02.172830157 +0000 UTC m=+150.381152140 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.175531 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.175574 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.175587 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.175604 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.175619 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:58Z","lastTransitionTime":"2026-01-20T11:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.189228 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.193942 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.193976 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.193990 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.194007 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.194019 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:58Z","lastTransitionTime":"2026-01-20T11:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.209781 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.213959 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.214165 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.214267 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.214352 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.214499 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:58Z","lastTransitionTime":"2026-01-20T11:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.233554 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.238643 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.238690 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.238706 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.238728 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.238746 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:58Z","lastTransitionTime":"2026-01-20T11:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.258544 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.258887 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.260375 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.260507 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.260597 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.260678 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.260752 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:58Z","lastTransitionTime":"2026-01-20T11:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.364816 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.365252 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.365582 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.365763 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.365939 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:58Z","lastTransitionTime":"2026-01-20T11:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.468613 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.468890 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.468964 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.469045 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.469130 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:58Z","lastTransitionTime":"2026-01-20T11:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.475115 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.475284 4725 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.475343 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 11:07:02.475330609 +0000 UTC m=+150.683652572 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.571844 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.571915 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.571951 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.571975 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.571991 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:58Z","lastTransitionTime":"2026-01-20T11:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.675898 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.676239 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.676257 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.676278 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.676292 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:58Z","lastTransitionTime":"2026-01-20T11:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.778973 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.779047 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.779067 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.779148 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.779169 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:58Z","lastTransitionTime":"2026-01-20T11:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.881667 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.881760 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.881784 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.881815 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.881839 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:58Z","lastTransitionTime":"2026-01-20T11:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.932218 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.932460 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.985532 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.985572 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.985581 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.985597 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.985607 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:58Z","lastTransitionTime":"2026-01-20T11:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.043961 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovnkube-controller/2.log" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.046057 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerStarted","Data":"62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241"} Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.046503 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.063791 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.077655 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b
62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.088136 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.088163 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.088171 4725 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.088185 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.088194 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:59Z","lastTransitionTime":"2026-01-20T11:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.091798 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 
20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.106826 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o:
//660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastS
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.111717 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 04:43:51.070777852 +0000 UTC Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.129311 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.145004 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.167184 4725 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:29Z\\\",\\\"message\\\":\\\"20 11:05:29.267425 6319 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 11:05:29.267458 6319 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 11:05:29.267476 6319 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 11:05:29.268196 6319 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0120 11:05:29.268265 6319 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:29.268284 6319 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:29.268333 6319 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 11:05:29.268348 6319 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 11:05:29.268385 6319 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 11:05:29.268403 6319 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:29.268419 6319 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:29.268431 6319 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:29.268442 6319 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 11:05:29.269138 6319 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:29.269231 6319 factory.go:656] Stopping watch factory\\\\nI0120 11:05:29.269255 6319 ovnkube.go:599] Stopped ovnkube\\\\nI0120 
11:05:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.184631 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b70234
9131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.190267 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.190295 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.190304 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:59 crc 
kubenswrapper[4725]: I0120 11:05:59.190318 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.190329 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:59Z","lastTransitionTime":"2026-01-20T11:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.198709 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83
8b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.210945 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31582cdcadbdaf1ab01a8b97fcd1abb4352c22086d2562704ab13dd4f470cea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:45Z\\\",\\\"message\\\":\\\"2026-01-20T11:05:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f\\\\n2026-01-20T11:05:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f to /host/opt/cni/bin/\\\\n2026-01-20T11:05:00Z [verbose] multus-daemon started\\\\n2026-01-20T11:05:00Z [verbose] 
Readiness Indicator file check\\\\n2026-01-20T11:05:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.223029 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.233684 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5489d4-628b-4079-b05c-06f8dcf74f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9068777e4f01e848a82bf288d7a3392cdc91101ee3a01bb4a81df93821dd63a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c38f828d227876595bd9deb8b2d85fb59815051b2e2bfa76a4708ffed580f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35e6d9a60b46ffa8c3718ab0311f05ffc829ac99c3a07206ee12402ebad872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.246196 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.261550 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.275482 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"n
ode-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.293056 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.293158 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.293172 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.293187 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.293197 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:59Z","lastTransitionTime":"2026-01-20T11:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.294270 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.313521 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.324255 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.396166 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.396203 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.396215 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.396238 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.396250 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:59Z","lastTransitionTime":"2026-01-20T11:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.531947 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.531990 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.532003 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.532019 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.532031 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:59Z","lastTransitionTime":"2026-01-20T11:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.634130 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.634194 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.634218 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.634247 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.634270 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:59Z","lastTransitionTime":"2026-01-20T11:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.737030 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.737149 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.737210 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.737249 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.737287 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:59Z","lastTransitionTime":"2026-01-20T11:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.932000 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:59 crc kubenswrapper[4725]: E0120 11:05:59.933413 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.932537 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:59 crc kubenswrapper[4725]: E0120 11:05:59.933664 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.932486 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:59 crc kubenswrapper[4725]: E0120 11:05:59.933877 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.974985 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.975503 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.975920 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.976253 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.976508 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:59Z","lastTransitionTime":"2026-01-20T11:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.990379 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.079006 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.079039 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.079047 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.079060 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.079069 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:00Z","lastTransitionTime":"2026-01-20T11:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.112558 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 21:06:44.64007413 +0000 UTC Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.182100 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.182140 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.182159 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.182175 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.182260 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:00Z","lastTransitionTime":"2026-01-20T11:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.285286 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.285329 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.285346 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.285362 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.285374 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:00Z","lastTransitionTime":"2026-01-20T11:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.387998 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.388059 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.388126 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.388177 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.388197 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:00Z","lastTransitionTime":"2026-01-20T11:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.492198 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.492286 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.492310 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.492342 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.492365 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:00Z","lastTransitionTime":"2026-01-20T11:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.596528 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.596625 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.596666 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.596697 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.596719 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:00Z","lastTransitionTime":"2026-01-20T11:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.700288 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.700356 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.700379 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.700402 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.700435 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:00Z","lastTransitionTime":"2026-01-20T11:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.803750 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.803802 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.803819 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.803844 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.803862 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:00Z","lastTransitionTime":"2026-01-20T11:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.906137 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.906165 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.906174 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.906187 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.906196 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:00Z","lastTransitionTime":"2026-01-20T11:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.932303 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:00 crc kubenswrapper[4725]: E0120 11:06:00.932425 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.008793 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.008831 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.008842 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.008854 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.008863 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:01Z","lastTransitionTime":"2026-01-20T11:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.054264 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovnkube-controller/3.log" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.056533 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovnkube-controller/2.log" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.059995 4725 generic.go:334] "Generic (PLEG): container finished" podID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerID="62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241" exitCode=1 Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.060034 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerDied","Data":"62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241"} Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.060102 4725 scope.go:117] "RemoveContainer" containerID="0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.061338 4725 scope.go:117] "RemoveContainer" containerID="62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241" Jan 20 11:06:01 crc kubenswrapper[4725]: E0120 11:06:01.064204 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.079212 4725 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.093954 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.108069 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.111837 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.111891 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.111909 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.111931 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.111949 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:01Z","lastTransitionTime":"2026-01-20T11:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.113473 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 11:50:28.334649357 +0000 UTC Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.123188 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"na
me\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.135865 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31582cdcadbdaf1ab01a8b97fcd1abb4352c2208
6d2562704ab13dd4f470cea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:45Z\\\",\\\"message\\\":\\\"2026-01-20T11:05:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f\\\\n2026-01-20T11:05:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f to /host/opt/cni/bin/\\\\n2026-01-20T11:05:00Z [verbose] multus-daemon started\\\\n2026-01-20T11:05:00Z [verbose] Readiness Indicator file check\\\\n2026-01-20T11:05:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.151138 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.167573 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5489d4-628b-4079-b05c-06f8dcf74f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9068777e4f01e848a82bf288d7a3392cdc91101ee3a01bb4a81df93821dd63a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c38f828d227876595bd9deb8b2d85fb59815051b2e2bfa76a4708ffed580f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35e6d9a60b46ffa8c3718ab0311f05ffc829ac99c3a07206ee12402ebad872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.181349 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.194785 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.206452 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.215633 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.215657 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.215665 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.215678 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.215686 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:01Z","lastTransitionTime":"2026-01-20T11:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.220643 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cdc8181-1be1-4da5-9049-40528ec5f9b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8132f828bbc5741a0a318fe005b753dff87f06d263948c088483daca665ce86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6faa6708fa29d5eb071d0383feaa4d4ca6f04d52a3b5ad147a929daee25712aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6faa6708fa29d5eb071d0383feaa4d4ca6f04d52a3b5ad147a929daee25712aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.234534 4725 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.245246 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.256219 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.274408 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b
62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.293003 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.312059 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:29Z\\\",\\\"message\\\":\\\"20 11:05:29.267425 6319 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 11:05:29.267458 6319 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 11:05:29.267476 6319 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 11:05:29.268196 6319 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0120 11:05:29.268265 6319 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:29.268284 6319 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:29.268333 6319 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 11:05:29.268348 6319 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 11:05:29.268385 6319 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 11:05:29.268403 6319 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:29.268419 6319 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:29.268431 6319 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:29.268442 6319 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 11:05:29.269138 6319 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:29.269231 6319 factory.go:656] Stopping watch factory\\\\nI0120 11:05:29.269255 6319 ovnkube.go:599] Stopped ovnkube\\\\nI0120 11:05:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:06:00Z\\\",\\\"message\\\":\\\"le to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z]\\\\nI0120 11:05:59.983920 6721 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"61d39e4d-21a9-4387-9a2b-fa4ad14792e2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLB\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPa
th\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\"
:\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc 
kubenswrapper[4725]: I0120 11:06:01.317867 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.317896 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.317904 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.317918 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.317930 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:01Z","lastTransitionTime":"2026-01-20T11:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.326504 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.348985 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.420285 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.420373 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.420431 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.420478 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.420502 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:01Z","lastTransitionTime":"2026-01-20T11:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.523755 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.523807 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.523817 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.523833 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.523843 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:01Z","lastTransitionTime":"2026-01-20T11:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.626929 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.626989 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.627015 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.627045 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.627056 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:01Z","lastTransitionTime":"2026-01-20T11:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.730498 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.730545 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.730568 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.730584 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.730595 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:01Z","lastTransitionTime":"2026-01-20T11:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.833462 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.833550 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.833575 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.833611 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.833635 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:01Z","lastTransitionTime":"2026-01-20T11:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.931637 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.931682 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.931707 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:01 crc kubenswrapper[4725]: E0120 11:06:01.931813 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:01 crc kubenswrapper[4725]: E0120 11:06:01.931967 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:01 crc kubenswrapper[4725]: E0120 11:06:01.932165 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.936864 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.936894 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.936903 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.936916 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.936925 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:01Z","lastTransitionTime":"2026-01-20T11:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.039972 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.040112 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.040133 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.040159 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.040176 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:02Z","lastTransitionTime":"2026-01-20T11:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.065111 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovnkube-controller/3.log" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.113859 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 01:31:13.234204942 +0000 UTC Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.143167 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.143202 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.143211 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.143224 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.143233 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:02Z","lastTransitionTime":"2026-01-20T11:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.245865 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.245948 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.245973 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.246003 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.246026 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:02Z","lastTransitionTime":"2026-01-20T11:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.350063 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.350158 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.350177 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.350207 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.350231 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:02Z","lastTransitionTime":"2026-01-20T11:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.453213 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.453289 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.453306 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.453332 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.453351 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:02Z","lastTransitionTime":"2026-01-20T11:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.556190 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.556255 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.556274 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.556298 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.556354 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:02Z","lastTransitionTime":"2026-01-20T11:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.660558 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.660609 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.660623 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.660642 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.660654 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:02Z","lastTransitionTime":"2026-01-20T11:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.763783 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.763832 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.763845 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.763860 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.763871 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:02Z","lastTransitionTime":"2026-01-20T11:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.866281 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.866754 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.866766 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.866784 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.866795 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:02Z","lastTransitionTime":"2026-01-20T11:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.932473 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:02 crc kubenswrapper[4725]: E0120 11:06:02.932678 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.950875 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.967120 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6
c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.969055 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.969110 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.969124 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.969139 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.969149 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:02Z","lastTransitionTime":"2026-01-20T11:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.985720 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cdc8181-1be1-4da5-9049-40528ec5f9b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8132f828bbc5741a0a318fe005b753dff87f06d263948c088483daca665ce86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bd
eaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6faa6708fa29d5eb071d0383feaa4d4ca6f04d52a3b5ad147a929daee25712aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6faa6708fa29d5eb071d0383feaa4d4ca6f04d52a3b5ad147a929daee25712aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.001770 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.018443 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.039476 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.053915 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b
62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.065216 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc 
kubenswrapper[4725]: I0120 11:06:03.072691 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.072741 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.072752 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.072768 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.072781 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:03Z","lastTransitionTime":"2026-01-20T11:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.088093 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:29Z\\\",\\\"message\\\":\\\"20 11:05:29.267425 6319 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 11:05:29.267458 6319 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 11:05:29.267476 6319 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 11:05:29.268196 6319 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0120 11:05:29.268265 6319 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:29.268284 6319 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:29.268333 6319 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 11:05:29.268348 6319 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 11:05:29.268385 6319 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 11:05:29.268403 6319 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:29.268419 6319 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:29.268431 6319 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:29.268442 6319 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 11:05:29.269138 6319 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:29.269231 6319 factory.go:656] Stopping watch factory\\\\nI0120 11:05:29.269255 6319 ovnkube.go:599] Stopped ovnkube\\\\nI0120 11:05:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:06:00Z\\\",\\\"message\\\":\\\"le to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z]\\\\nI0120 11:05:59.983920 6721 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"61d39e4d-21a9-4387-9a2b-fa4ad14792e2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLB\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPa
th\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\"
:\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc 
kubenswrapper[4725]: I0120 11:06:03.106647 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2
d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.114131 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 16:44:23.628409275 +0000 UTC Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.136354 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.160223 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.173174 4725 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.175902 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.175968 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.175990 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.176034 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.176054 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:03Z","lastTransitionTime":"2026-01-20T11:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.191248 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.206538 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdf
d287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.221756 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31582cdcadbdaf1ab01a8b97fcd1abb4352c22086d2562704ab13dd4f470cea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\
":\\\"2026-01-20T11:05:45Z\\\",\\\"message\\\":\\\"2026-01-20T11:05:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f\\\\n2026-01-20T11:05:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f to /host/opt/cni/bin/\\\\n2026-01-20T11:05:00Z [verbose] multus-daemon started\\\\n2026-01-20T11:05:00Z [verbose] Readiness Indicator file check\\\\n2026-01-20T11:05:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\
\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.234543 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.251166 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5489d4-628b-4079-b05c-06f8dcf74f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9068777e4f01e848a82bf288d7a3392cdc91101ee3a01bb4a81df93821dd63a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c38f828d227876595bd9deb8b2d85fb59815051b2e2bfa76a4708ffed580f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35e6d9a60b46ffa8c3718ab0311f05ffc829ac99c3a07206ee12402ebad872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.272408 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.278901 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.278980 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.279003 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.279034 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.279056 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:03Z","lastTransitionTime":"2026-01-20T11:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.381548 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.381592 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.381602 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.381618 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.381630 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:03Z","lastTransitionTime":"2026-01-20T11:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.484897 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.484946 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.484958 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.484976 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.484988 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:03Z","lastTransitionTime":"2026-01-20T11:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.587259 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.587310 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.587325 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.587348 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.587363 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:03Z","lastTransitionTime":"2026-01-20T11:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.689634 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.689692 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.689709 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.689734 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.689750 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:03Z","lastTransitionTime":"2026-01-20T11:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.791933 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.792004 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.792024 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.792050 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.792068 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:03Z","lastTransitionTime":"2026-01-20T11:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.894690 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.894786 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.894801 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.894819 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.894833 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:03Z","lastTransitionTime":"2026-01-20T11:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.931843 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.931889 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.931916 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:03 crc kubenswrapper[4725]: E0120 11:06:03.932000 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:03 crc kubenswrapper[4725]: E0120 11:06:03.932113 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:03 crc kubenswrapper[4725]: E0120 11:06:03.932303 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.997253 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.997315 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.997331 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.997354 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.997371 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:03Z","lastTransitionTime":"2026-01-20T11:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.099801 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.099881 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.099898 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.099924 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.099943 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:04Z","lastTransitionTime":"2026-01-20T11:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.114734 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 16:37:15.02669279 +0000 UTC Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.202205 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.202251 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.202263 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.202281 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.202333 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:04Z","lastTransitionTime":"2026-01-20T11:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.305191 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.305247 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.305271 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.305302 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.305331 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:04Z","lastTransitionTime":"2026-01-20T11:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.408303 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.408350 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.408365 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.408384 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.408395 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:04Z","lastTransitionTime":"2026-01-20T11:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.512021 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.512426 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.512514 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.512617 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.512709 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:04Z","lastTransitionTime":"2026-01-20T11:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.616001 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.616065 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.616119 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.616153 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.616175 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:04Z","lastTransitionTime":"2026-01-20T11:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.720338 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.720420 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.720445 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.720476 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.720504 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:04Z","lastTransitionTime":"2026-01-20T11:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.823341 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.823392 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.823404 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.823422 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.823436 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:04Z","lastTransitionTime":"2026-01-20T11:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.927062 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.927131 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.927143 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.927164 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.927177 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:04Z","lastTransitionTime":"2026-01-20T11:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.931585 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:04 crc kubenswrapper[4725]: E0120 11:06:04.931724 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.030162 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.030226 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.030243 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.030268 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.030285 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:05Z","lastTransitionTime":"2026-01-20T11:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.115805 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 03:42:08.614187768 +0000 UTC Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.132869 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.132936 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.132961 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.132993 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.133057 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:05Z","lastTransitionTime":"2026-01-20T11:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.235756 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.235789 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.235799 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.235815 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.235824 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:05Z","lastTransitionTime":"2026-01-20T11:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.339305 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.339383 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.339428 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.339462 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.339493 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:05Z","lastTransitionTime":"2026-01-20T11:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.442202 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.442240 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.442252 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.442270 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.442281 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:05Z","lastTransitionTime":"2026-01-20T11:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.544800 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.544845 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.544855 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.544869 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.544879 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:05Z","lastTransitionTime":"2026-01-20T11:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.647919 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.647977 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.647990 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.648021 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.648035 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:05Z","lastTransitionTime":"2026-01-20T11:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.749949 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.749989 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.749997 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.750012 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.750023 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:05Z","lastTransitionTime":"2026-01-20T11:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.853019 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.853060 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.853071 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.853107 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.853122 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:05Z","lastTransitionTime":"2026-01-20T11:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.931843 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.931887 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.931938 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:05 crc kubenswrapper[4725]: E0120 11:06:05.932067 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:05 crc kubenswrapper[4725]: E0120 11:06:05.932219 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:05 crc kubenswrapper[4725]: E0120 11:06:05.932333 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.956014 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.956053 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.956063 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.956090 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.956100 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:05Z","lastTransitionTime":"2026-01-20T11:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.059053 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.059143 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.059169 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.059199 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.059223 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:06Z","lastTransitionTime":"2026-01-20T11:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.116602 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 19:32:21.820955413 +0000 UTC Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.162055 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.162117 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.162127 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.162143 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.162153 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:06Z","lastTransitionTime":"2026-01-20T11:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.264701 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.264745 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.264758 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.264776 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.264789 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:06Z","lastTransitionTime":"2026-01-20T11:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.368170 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.368223 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.368242 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.368268 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.368286 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:06Z","lastTransitionTime":"2026-01-20T11:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.470678 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.470742 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.470754 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.470821 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.470838 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:06Z","lastTransitionTime":"2026-01-20T11:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.573781 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.573827 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.573862 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.573882 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.573895 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:06Z","lastTransitionTime":"2026-01-20T11:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.677436 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.677513 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.677533 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.677561 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.677579 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:06Z","lastTransitionTime":"2026-01-20T11:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.780685 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.780734 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.780746 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.780762 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.780772 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:06Z","lastTransitionTime":"2026-01-20T11:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.884624 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.884708 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.884732 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.884764 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.884786 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:06Z","lastTransitionTime":"2026-01-20T11:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.931690 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:06 crc kubenswrapper[4725]: E0120 11:06:06.931963 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.987631 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.987674 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.987707 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.987729 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.987743 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:06Z","lastTransitionTime":"2026-01-20T11:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.090737 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.090798 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.090811 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.090829 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.090843 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:07Z","lastTransitionTime":"2026-01-20T11:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.117667 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 04:34:03.111493072 +0000 UTC Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.193509 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.193563 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.193575 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.193595 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.193610 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:07Z","lastTransitionTime":"2026-01-20T11:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.296443 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.296487 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.296498 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.296516 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.296528 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:07Z","lastTransitionTime":"2026-01-20T11:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.400230 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.400309 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.400331 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.400362 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.400384 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:07Z","lastTransitionTime":"2026-01-20T11:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.503551 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.503608 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.503620 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.503635 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.503645 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:07Z","lastTransitionTime":"2026-01-20T11:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.606563 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.606630 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.606650 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.606677 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.606694 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:07Z","lastTransitionTime":"2026-01-20T11:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.708875 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.709004 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.709045 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.709070 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.709151 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:07Z","lastTransitionTime":"2026-01-20T11:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.812477 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.812527 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.812538 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.812554 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.812566 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:07Z","lastTransitionTime":"2026-01-20T11:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.914896 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.914939 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.914949 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.914962 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.914972 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:07Z","lastTransitionTime":"2026-01-20T11:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.931509 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.931596 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.931651 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:07 crc kubenswrapper[4725]: E0120 11:06:07.931739 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:07 crc kubenswrapper[4725]: E0120 11:06:07.931869 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:07 crc kubenswrapper[4725]: E0120 11:06:07.932051 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.017423 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.017473 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.017485 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.017503 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.017515 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:08Z","lastTransitionTime":"2026-01-20T11:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.117809 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 03:38:40.380298124 +0000 UTC Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.119734 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.119790 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.119803 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.119830 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.119842 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:08Z","lastTransitionTime":"2026-01-20T11:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.223009 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.223060 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.223126 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.223158 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.223178 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:08Z","lastTransitionTime":"2026-01-20T11:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.327988 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.328015 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.328023 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.328035 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.328043 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:08Z","lastTransitionTime":"2026-01-20T11:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.431224 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.431307 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.431332 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.431360 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.431381 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:08Z","lastTransitionTime":"2026-01-20T11:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.534230 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.534333 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.534359 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.534387 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.534406 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:08Z","lastTransitionTime":"2026-01-20T11:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.546603 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.546643 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.546657 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.546675 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.546688 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:08Z","lastTransitionTime":"2026-01-20T11:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:08 crc kubenswrapper[4725]: E0120 11:06:08.564745 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.569047 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.569071 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.569095 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.569108 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.569117 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:08Z","lastTransitionTime":"2026-01-20T11:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:08 crc kubenswrapper[4725]: E0120 11:06:08.585609 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.593514 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.593558 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.593573 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.593593 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.593610 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:08Z","lastTransitionTime":"2026-01-20T11:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:08 crc kubenswrapper[4725]: E0120 11:06:08.645177 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.649652 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.649706 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.649723 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.649755 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.649771 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:08Z","lastTransitionTime":"2026-01-20T11:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:08 crc kubenswrapper[4725]: E0120 11:06:08.680461 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.684920 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.684956 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.684966 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.684982 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.684992 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:08Z","lastTransitionTime":"2026-01-20T11:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:08 crc kubenswrapper[4725]: E0120 11:06:08.697904 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:08 crc kubenswrapper[4725]: E0120 11:06:08.698027 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.699500 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.699521 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.699529 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.699542 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.699550 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:08Z","lastTransitionTime":"2026-01-20T11:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.802035 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.802112 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.802129 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.802154 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.802168 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:08Z","lastTransitionTime":"2026-01-20T11:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.905288 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.905330 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.905345 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.905364 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.905375 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:08Z","lastTransitionTime":"2026-01-20T11:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.932111 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:08 crc kubenswrapper[4725]: E0120 11:06:08.932283 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.008463 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.008513 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.008531 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.008553 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.008569 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:09Z","lastTransitionTime":"2026-01-20T11:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.110316 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.110356 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.110368 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.110384 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.110394 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:09Z","lastTransitionTime":"2026-01-20T11:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.118526 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 20:41:11.746731945 +0000 UTC Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.212749 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.212796 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.212807 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.212822 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.212846 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:09Z","lastTransitionTime":"2026-01-20T11:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.314637 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.314669 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.314678 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.314690 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.314700 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:09Z","lastTransitionTime":"2026-01-20T11:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.418062 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.418158 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.418184 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.418215 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.418238 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:09Z","lastTransitionTime":"2026-01-20T11:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.521347 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.521536 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.521562 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.521586 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.521605 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:09Z","lastTransitionTime":"2026-01-20T11:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.624537 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.624601 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.624622 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.624648 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.624666 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:09Z","lastTransitionTime":"2026-01-20T11:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.727237 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.727270 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.727280 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.727297 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.727307 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:09Z","lastTransitionTime":"2026-01-20T11:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.830584 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.830688 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.830701 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.830719 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.830733 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:09Z","lastTransitionTime":"2026-01-20T11:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.931556 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.931624 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.931559 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:09 crc kubenswrapper[4725]: E0120 11:06:09.931783 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:09 crc kubenswrapper[4725]: E0120 11:06:09.931918 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:09 crc kubenswrapper[4725]: E0120 11:06:09.932007 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.933643 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.933689 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.933705 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.933728 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.933749 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:09Z","lastTransitionTime":"2026-01-20T11:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.037212 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.037274 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.037291 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.037315 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.037332 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:10Z","lastTransitionTime":"2026-01-20T11:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.118888 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 12:03:46.249751403 +0000 UTC Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.139061 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.139129 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.139144 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.139162 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.139175 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:10Z","lastTransitionTime":"2026-01-20T11:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.242519 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.242555 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.242567 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.242586 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.242596 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:10Z","lastTransitionTime":"2026-01-20T11:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.344766 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.344811 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.344822 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.344840 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.344853 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:10Z","lastTransitionTime":"2026-01-20T11:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.447479 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.447517 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.447527 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.447542 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.447553 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:10Z","lastTransitionTime":"2026-01-20T11:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.549897 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.549955 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.549965 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.549977 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.549988 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:10Z","lastTransitionTime":"2026-01-20T11:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.652435 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.652466 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.652475 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.652488 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.652498 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:10Z","lastTransitionTime":"2026-01-20T11:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.755113 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.755163 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.755174 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.755193 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.755204 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:10Z","lastTransitionTime":"2026-01-20T11:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.857221 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.857261 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.857270 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.857284 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.857293 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:10Z","lastTransitionTime":"2026-01-20T11:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.931480 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:10 crc kubenswrapper[4725]: E0120 11:06:10.931669 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.960878 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.960921 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.960930 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.960947 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.960959 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:10Z","lastTransitionTime":"2026-01-20T11:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.062710 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.062747 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.062756 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.062770 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.062789 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:11Z","lastTransitionTime":"2026-01-20T11:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.119168 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 22:02:03.054434706 +0000 UTC Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.165652 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.165755 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.165775 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.165799 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.165817 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:11Z","lastTransitionTime":"2026-01-20T11:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.268282 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.268329 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.268364 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.268382 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.268393 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:11Z","lastTransitionTime":"2026-01-20T11:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.372176 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.372298 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.372318 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.372339 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.372385 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:11Z","lastTransitionTime":"2026-01-20T11:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.475353 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.475421 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.475439 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.475466 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.475486 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:11Z","lastTransitionTime":"2026-01-20T11:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.578159 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.578231 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.578240 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.578257 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.578266 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:11Z","lastTransitionTime":"2026-01-20T11:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.680673 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.680781 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.680794 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.680837 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.680847 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:11Z","lastTransitionTime":"2026-01-20T11:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.784024 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.784133 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.784159 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.784186 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.784209 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:11Z","lastTransitionTime":"2026-01-20T11:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.886892 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.886936 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.886945 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.886962 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.886972 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:11Z","lastTransitionTime":"2026-01-20T11:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.932495 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.932802 4725 scope.go:117] "RemoveContainer" containerID="62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.932934 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:11 crc kubenswrapper[4725]: E0120 11:06:11.933064 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.933342 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:11 crc kubenswrapper[4725]: E0120 11:06:11.933519 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:11 crc kubenswrapper[4725]: E0120 11:06:11.933807 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" Jan 20 11:06:11 crc kubenswrapper[4725]: E0120 11:06:11.934019 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.946577 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cdc8181-1be1-4da5-9049-40528ec5f9b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8132f828bbc5741a0a318fe005b753dff87f06d263948c088483daca665ce86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6faa6708fa29d5eb071d0383feaa4d4ca6f04d52a3b5ad147a929daee25712aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6faa6708fa29d5eb071d0383feaa4d4ca6f04d52a3b5ad147a929daee25712aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.967961 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.989930 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.989963 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.989971 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.989984 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.990013 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:11Z","lastTransitionTime":"2026-01-20T11:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.990487 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.004549 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.018998 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.034157 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.047139 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b
62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.058289 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc 
kubenswrapper[4725]: I0120 11:06:12.072207 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2
d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.092688 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.092737 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.092752 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.092774 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.092790 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:12Z","lastTransitionTime":"2026-01-20T11:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.096513 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.110959 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.119800 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 18:04:53.132519994 +0000 UTC Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.140861 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:06:00Z\\\",\\\"message\\\":\\\"le to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: 
failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z]\\\\nI0120 11:05:59.983920 6721 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"61d39e4d-21a9-4387-9a2b-fa4ad14792e2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLB\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf
09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.157375 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.174275 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5489d4-628b-4079-b05c-06f8dcf74f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9068777e4f01e848a82bf288d7a3392cdc91101ee3a01bb4a81df93821dd63a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c38f828d227876595bd9deb8b2d85fb59815051b2e2bfa76a4708ffed580f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35e6d9a60b46ffa8c3718ab0311f05ffc829ac99c3a07206ee12402ebad872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.190196 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.195498 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.195585 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.195603 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.195663 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.195683 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:12Z","lastTransitionTime":"2026-01-20T11:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.205412 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.217978 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.232623 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.250679 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31582cdcadbdaf1ab01a8b97fcd1abb4352c22086d2562704ab13dd4f470cea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:45Z\\\",\\\"message\\\":\\\"2026-01-20T11:05:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f\\\\n2026-01-20T11:05:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f to /host/opt/cni/bin/\\\\n2026-01-20T11:05:00Z [verbose] multus-daemon started\\\\n2026-01-20T11:05:00Z [verbose] Readiness Indicator file check\\\\n2026-01-20T11:05:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\
":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.298703 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.298803 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.298821 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.298846 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.298860 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:12Z","lastTransitionTime":"2026-01-20T11:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.401515 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.401554 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.401565 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.401583 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.401592 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:12Z","lastTransitionTime":"2026-01-20T11:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.504833 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.504899 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.504914 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.504936 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.504951 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:12Z","lastTransitionTime":"2026-01-20T11:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.608174 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.608234 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.608250 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.608272 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.608286 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:12Z","lastTransitionTime":"2026-01-20T11:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.712411 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.712559 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.712575 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.712594 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.712608 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:12Z","lastTransitionTime":"2026-01-20T11:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.814556 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.814589 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.814601 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.814616 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.814626 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:12Z","lastTransitionTime":"2026-01-20T11:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.921806 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.922139 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.922243 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.922362 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.922433 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:12Z","lastTransitionTime":"2026-01-20T11:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.931430 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:12 crc kubenswrapper[4725]: E0120 11:06:12.931770 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.946815 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/ku
be-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.967218 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdf
d287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.982545 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31582cdcadbdaf1ab01a8b97fcd1abb4352c22086d2562704ab13dd4f470cea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\
":\\\"2026-01-20T11:05:45Z\\\",\\\"message\\\":\\\"2026-01-20T11:05:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f\\\\n2026-01-20T11:05:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f to /host/opt/cni/bin/\\\\n2026-01-20T11:05:00Z [verbose] multus-daemon started\\\\n2026-01-20T11:05:00Z [verbose] Readiness Indicator file check\\\\n2026-01-20T11:05:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\
\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.000749 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.017834 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5489d4-628b-4079-b05c-06f8dcf74f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9068777e4f01e848a82bf288d7a3392cdc91101ee3a01bb4a81df93821dd63a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c38f828d227876595bd9deb8b2d85fb59815051b2e2bfa76a4708ffed580f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35e6d9a60b46ffa8c3718ab0311f05ffc829ac99c3a07206ee12402ebad872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.024498 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.024556 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.024570 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.024592 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.024608 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:13Z","lastTransitionTime":"2026-01-20T11:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.033350 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.047564 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.058323 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.072431 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cdc8181-1be1-4da5-9049-40528ec5f9b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8132f828bbc5741a0a318fe005b753dff87f06d263948c088483daca665ce86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6faa6708fa29d5eb071d0383feaa4d4ca6f04d52a3b5ad147a929daee25712aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6faa6708fa29d5eb071d0383feaa4d4ca6f04d52a3b5ad147a929daee25712aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.085241 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.096280 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.106166 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.118849 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.120990 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 13:35:51.131156284 +0000 UTC Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.126869 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.126900 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.126910 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.126927 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.126937 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:13Z","lastTransitionTime":"2026-01-20T11:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.129441 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.139988 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc 
kubenswrapper[4725]: I0120 11:06:13.151662 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2
d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.173766 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.187920 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.212553 4725 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:06:00Z\\\",\\\"message\\\":\\\"le to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: 
failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z]\\\\nI0120 11:05:59.983920 6721 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"61d39e4d-21a9-4387-9a2b-fa4ad14792e2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLB\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf
09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.229603 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.229646 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.229656 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.229672 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.229684 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:13Z","lastTransitionTime":"2026-01-20T11:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.332111 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.332148 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.332160 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.332178 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.332188 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:13Z","lastTransitionTime":"2026-01-20T11:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.433788 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.433841 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.433852 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.433871 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.433882 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:13Z","lastTransitionTime":"2026-01-20T11:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.536572 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.536615 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.536623 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.536637 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.536645 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:13Z","lastTransitionTime":"2026-01-20T11:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.639476 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.639533 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.639548 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.639569 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.639584 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:13Z","lastTransitionTime":"2026-01-20T11:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.742386 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.742458 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.742478 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.742504 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.742523 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:13Z","lastTransitionTime":"2026-01-20T11:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.844605 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.844675 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.844697 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.844723 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.844742 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:13Z","lastTransitionTime":"2026-01-20T11:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.932016 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:13 crc kubenswrapper[4725]: E0120 11:06:13.932486 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.932340 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:13 crc kubenswrapper[4725]: E0120 11:06:13.932718 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.932301 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:13 crc kubenswrapper[4725]: E0120 11:06:13.932898 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.948336 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.948379 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.948393 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.948410 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.948423 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:13Z","lastTransitionTime":"2026-01-20T11:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.051456 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.051836 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.051958 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.052158 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.052288 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:14Z","lastTransitionTime":"2026-01-20T11:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.121901 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 00:08:42.10928996 +0000 UTC Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.154812 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.154910 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.154931 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.154958 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.154972 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:14Z","lastTransitionTime":"2026-01-20T11:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.258184 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.258541 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.258641 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.258777 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.258864 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:14Z","lastTransitionTime":"2026-01-20T11:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.361976 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.362010 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.362021 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.362036 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.362047 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:14Z","lastTransitionTime":"2026-01-20T11:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.464518 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.464567 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.464577 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.464593 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.464604 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:14Z","lastTransitionTime":"2026-01-20T11:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.567330 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.567372 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.567380 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.567395 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.567406 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:14Z","lastTransitionTime":"2026-01-20T11:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.641779 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs\") pod \"network-metrics-daemon-5lfc4\" (UID: \"a5d55efc-e85a-4a02-a4ce-7355df9fea66\") " pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:14 crc kubenswrapper[4725]: E0120 11:06:14.641965 4725 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 11:06:14 crc kubenswrapper[4725]: E0120 11:06:14.642020 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs podName:a5d55efc-e85a-4a02-a4ce-7355df9fea66 nodeName:}" failed. No retries permitted until 2026-01-20 11:07:18.642007864 +0000 UTC m=+166.850329827 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs") pod "network-metrics-daemon-5lfc4" (UID: "a5d55efc-e85a-4a02-a4ce-7355df9fea66") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.669516 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.669559 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.669573 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.669592 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.669616 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:14Z","lastTransitionTime":"2026-01-20T11:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.772332 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.772370 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.772381 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.772396 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.772406 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:14Z","lastTransitionTime":"2026-01-20T11:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.875031 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.875082 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.875109 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.875124 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.875134 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:14Z","lastTransitionTime":"2026-01-20T11:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.931276 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:14 crc kubenswrapper[4725]: E0120 11:06:14.931558 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.978408 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.978435 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.978444 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.978456 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.978465 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:14Z","lastTransitionTime":"2026-01-20T11:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.080447 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.080479 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.080488 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.080502 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.080511 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:15Z","lastTransitionTime":"2026-01-20T11:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.122770 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 22:08:11.368230692 +0000 UTC Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.182640 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.182690 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.182702 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.182716 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.182726 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:15Z","lastTransitionTime":"2026-01-20T11:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.286290 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.286353 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.286370 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.286393 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.286411 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:15Z","lastTransitionTime":"2026-01-20T11:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.389741 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.389809 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.389832 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.389861 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.389883 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:15Z","lastTransitionTime":"2026-01-20T11:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.492699 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.492944 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.492985 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.493015 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.493040 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:15Z","lastTransitionTime":"2026-01-20T11:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.595723 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.595774 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.595789 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.595811 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.595826 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:15Z","lastTransitionTime":"2026-01-20T11:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.698910 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.699223 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.699310 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.699445 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.699527 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:15Z","lastTransitionTime":"2026-01-20T11:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.803020 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.803071 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.803110 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.803129 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.803140 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:15Z","lastTransitionTime":"2026-01-20T11:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.905446 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.905492 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.905503 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.905520 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.905533 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:15Z","lastTransitionTime":"2026-01-20T11:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.931995 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.932074 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:15 crc kubenswrapper[4725]: E0120 11:06:15.932208 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:15 crc kubenswrapper[4725]: E0120 11:06:15.932326 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.932536 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:15 crc kubenswrapper[4725]: E0120 11:06:15.932708 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.008987 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.009076 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.009130 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.009155 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.009172 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:16Z","lastTransitionTime":"2026-01-20T11:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.111610 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.111679 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.111701 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.111732 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.111755 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:16Z","lastTransitionTime":"2026-01-20T11:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.123966 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 09:13:07.496403791 +0000 UTC Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.215183 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.215281 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.215295 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.215319 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.215331 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:16Z","lastTransitionTime":"2026-01-20T11:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.318305 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.318358 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.318368 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.318386 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.318397 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:16Z","lastTransitionTime":"2026-01-20T11:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.422021 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.422125 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.422147 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.422173 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.422184 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:16Z","lastTransitionTime":"2026-01-20T11:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.566713 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.566820 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.566836 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.566857 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.566868 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:16Z","lastTransitionTime":"2026-01-20T11:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.669217 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.669265 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.669275 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.669291 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.669300 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:16Z","lastTransitionTime":"2026-01-20T11:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.772528 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.772561 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.772569 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.772583 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.772593 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:16Z","lastTransitionTime":"2026-01-20T11:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.875048 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.875151 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.875183 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.875211 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.875232 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:16Z","lastTransitionTime":"2026-01-20T11:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.931984 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:16 crc kubenswrapper[4725]: E0120 11:06:16.932189 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.978801 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.978871 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.978885 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.978906 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.978921 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:16Z","lastTransitionTime":"2026-01-20T11:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.081691 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.081733 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.081742 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.081756 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.081768 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:17Z","lastTransitionTime":"2026-01-20T11:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.124872 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 14:14:06.466007128 +0000 UTC Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.184128 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.184300 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.184319 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.184338 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.184351 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:17Z","lastTransitionTime":"2026-01-20T11:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.287300 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.287336 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.287348 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.287364 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.287373 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:17Z","lastTransitionTime":"2026-01-20T11:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.390825 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.390912 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.390945 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.390967 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.390980 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:17Z","lastTransitionTime":"2026-01-20T11:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.493477 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.493543 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.493556 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.493577 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.493591 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:17Z","lastTransitionTime":"2026-01-20T11:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.597147 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.597190 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.597201 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.597220 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.597233 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:17Z","lastTransitionTime":"2026-01-20T11:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.699940 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.700032 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.700047 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.700071 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.700105 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:17Z","lastTransitionTime":"2026-01-20T11:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.802616 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.802733 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.802744 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.802761 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.802774 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:17Z","lastTransitionTime":"2026-01-20T11:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.905137 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.905188 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.905200 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.905216 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.905229 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:17Z","lastTransitionTime":"2026-01-20T11:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.931546 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:17 crc kubenswrapper[4725]: E0120 11:06:17.931668 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.931731 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.931549 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:17 crc kubenswrapper[4725]: E0120 11:06:17.932053 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:17 crc kubenswrapper[4725]: E0120 11:06:17.932125 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.007450 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.007737 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.007831 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.007922 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.008004 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:18Z","lastTransitionTime":"2026-01-20T11:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.110656 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.110731 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.110746 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.110762 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.110774 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:18Z","lastTransitionTime":"2026-01-20T11:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.125001 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 08:12:49.780898846 +0000 UTC Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.213441 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.213476 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.213486 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.213499 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.213509 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:18Z","lastTransitionTime":"2026-01-20T11:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.316392 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.316434 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.316451 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.316467 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.316478 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:18Z","lastTransitionTime":"2026-01-20T11:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.419142 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.419202 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.419222 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.419248 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.419267 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:18Z","lastTransitionTime":"2026-01-20T11:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.522192 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.522247 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.522265 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.522288 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.522305 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:18Z","lastTransitionTime":"2026-01-20T11:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.625494 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.625537 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.625548 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.625564 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.625576 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:18Z","lastTransitionTime":"2026-01-20T11:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.728044 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.728084 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.728122 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.728140 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.728150 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:18Z","lastTransitionTime":"2026-01-20T11:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.732120 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.732178 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.732196 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.732212 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.732223 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:18Z","lastTransitionTime":"2026-01-20T11:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:18 crc kubenswrapper[4725]: E0120 11:06:18.747456 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.751972 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.752006 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.752018 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.752032 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.752041 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:18Z","lastTransitionTime":"2026-01-20T11:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:18 crc kubenswrapper[4725]: E0120 11:06:18.771193 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:18Z is after 2025-08-24T17:21:41Z"
Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.777125 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.777186 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.777205 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.777232 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.777250 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:18Z","lastTransitionTime":"2026-01-20T11:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 20 11:06:18 crc kubenswrapper[4725]: E0120 11:06:18.798745 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:18Z is after 2025-08-24T17:21:41Z"
Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.805257 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.805414 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.805440 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.805469 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.805494 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:18Z","lastTransitionTime":"2026-01-20T11:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 20 11:06:18 crc kubenswrapper[4725]: E0120 11:06:18.822875 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.828160 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.828227 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.828245 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.828270 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.828289 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:18Z","lastTransitionTime":"2026-01-20T11:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:18 crc kubenswrapper[4725]: E0120 11:06:18.845476 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:18 crc kubenswrapper[4725]: E0120 11:06:18.845695 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.847668 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.847714 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.847756 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.847776 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.847789 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:18Z","lastTransitionTime":"2026-01-20T11:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.932059 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:18 crc kubenswrapper[4725]: E0120 11:06:18.932227 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.949642 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.949679 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.949689 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.949707 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.949724 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:18Z","lastTransitionTime":"2026-01-20T11:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.052254 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.052330 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.052345 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.052362 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.052374 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:19Z","lastTransitionTime":"2026-01-20T11:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.126820 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 12:30:12.848956282 +0000 UTC Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.154325 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.154383 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.154402 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.154431 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.154451 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:19Z","lastTransitionTime":"2026-01-20T11:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.256974 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.257012 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.257023 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.257038 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.257049 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:19Z","lastTransitionTime":"2026-01-20T11:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.359798 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.359850 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.359861 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.359876 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.359887 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:19Z","lastTransitionTime":"2026-01-20T11:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.463295 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.463365 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.463390 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.463420 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.463441 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:19Z","lastTransitionTime":"2026-01-20T11:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.566418 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.566488 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.566502 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.566519 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.566556 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:19Z","lastTransitionTime":"2026-01-20T11:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.669138 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.669201 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.669213 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.669229 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.669242 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:19Z","lastTransitionTime":"2026-01-20T11:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.772337 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.772390 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.772405 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.772427 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.772444 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:19Z","lastTransitionTime":"2026-01-20T11:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.875559 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.875606 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.875621 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.875641 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.875652 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:19Z","lastTransitionTime":"2026-01-20T11:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.931958 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.932028 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.932047 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:19 crc kubenswrapper[4725]: E0120 11:06:19.932236 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:19 crc kubenswrapper[4725]: E0120 11:06:19.932387 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:19 crc kubenswrapper[4725]: E0120 11:06:19.932591 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.978806 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.978875 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.978900 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.978932 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.978954 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:19Z","lastTransitionTime":"2026-01-20T11:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.081750 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.081810 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.081819 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.081834 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.081843 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:20Z","lastTransitionTime":"2026-01-20T11:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.126970 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 01:26:10.26363544 +0000 UTC Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.184746 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.184800 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.184813 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.184833 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.184845 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:20Z","lastTransitionTime":"2026-01-20T11:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.287596 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.287644 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.287654 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.287668 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.287679 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:20Z","lastTransitionTime":"2026-01-20T11:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.389474 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.389520 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.389533 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.389550 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.389562 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:20Z","lastTransitionTime":"2026-01-20T11:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.492274 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.492313 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.492323 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.492338 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.492349 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:20Z","lastTransitionTime":"2026-01-20T11:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.595045 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.595313 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.595379 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.595492 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.595569 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:20Z","lastTransitionTime":"2026-01-20T11:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.697966 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.698023 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.698039 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.698062 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.698116 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:20Z","lastTransitionTime":"2026-01-20T11:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.800462 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.800814 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.800934 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.801049 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.801189 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:20Z","lastTransitionTime":"2026-01-20T11:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.903952 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.903992 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.904001 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.904015 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.904026 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:20Z","lastTransitionTime":"2026-01-20T11:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.931878 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:20 crc kubenswrapper[4725]: E0120 11:06:20.932172 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.007223 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.007267 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.007279 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.007295 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.007305 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:21Z","lastTransitionTime":"2026-01-20T11:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.110668 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.110813 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.110838 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.110868 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.110894 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:21Z","lastTransitionTime":"2026-01-20T11:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.127597 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 13:48:06.219061337 +0000 UTC Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.212720 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.212759 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.212768 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.212783 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.212791 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:21Z","lastTransitionTime":"2026-01-20T11:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.314716 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.314758 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.314768 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.314784 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.314794 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:21Z","lastTransitionTime":"2026-01-20T11:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.417744 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.417799 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.417811 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.417872 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.417912 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:21Z","lastTransitionTime":"2026-01-20T11:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.520898 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.520959 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.520972 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.520989 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.521000 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:21Z","lastTransitionTime":"2026-01-20T11:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.624355 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.624387 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.624398 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.624414 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.624425 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:21Z","lastTransitionTime":"2026-01-20T11:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.727637 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.727684 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.727697 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.727710 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.727719 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:21Z","lastTransitionTime":"2026-01-20T11:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.830417 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.830460 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.830472 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.830489 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.830499 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:21Z","lastTransitionTime":"2026-01-20T11:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.931196 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.931244 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.931281 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:21 crc kubenswrapper[4725]: E0120 11:06:21.931402 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:21 crc kubenswrapper[4725]: E0120 11:06:21.931569 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:21 crc kubenswrapper[4725]: E0120 11:06:21.931769 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.932818 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.932898 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.932911 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.932927 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.932938 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:21Z","lastTransitionTime":"2026-01-20T11:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.035488 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.035541 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.035555 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.035573 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.035584 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:22Z","lastTransitionTime":"2026-01-20T11:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.128494 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 06:56:31.878540487 +0000 UTC Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.140193 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.140246 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.140265 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.140287 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.140299 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:22Z","lastTransitionTime":"2026-01-20T11:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.242700 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.242743 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.242754 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.242771 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.242784 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:22Z","lastTransitionTime":"2026-01-20T11:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.345126 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.345160 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.345169 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.345183 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.345192 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:22Z","lastTransitionTime":"2026-01-20T11:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.447198 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.447283 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.447311 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.447344 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.447365 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:22Z","lastTransitionTime":"2026-01-20T11:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.550827 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.551240 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.551359 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.551466 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.551551 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:22Z","lastTransitionTime":"2026-01-20T11:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.654023 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.654072 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.654103 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.654117 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.654126 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:22Z","lastTransitionTime":"2026-01-20T11:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.756584 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.756628 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.756674 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.756692 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.756703 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:22Z","lastTransitionTime":"2026-01-20T11:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.859819 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.859914 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.859933 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.859964 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.859982 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:22Z","lastTransitionTime":"2026-01-20T11:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.932134 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:22 crc kubenswrapper[4725]: E0120 11:06:22.932670 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.954932 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\
\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wherea
bouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:22Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.962590 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.962658 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.962669 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.962685 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.962696 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:22Z","lastTransitionTime":"2026-01-20T11:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.970817 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31582cdcadbdaf1ab01a8b97fcd1abb4352c22086d2562704ab13dd4f470cea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:45Z\\\",\\\"message\\\":\\\"2026-01-20T11:05:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f\\\\n2026-01-20T11:05:00+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f to /host/opt/cni/bin/\\\\n2026-01-20T11:05:00Z [verbose] multus-daemon started\\\\n2026-01-20T11:05:00Z [verbose] Readiness Indicator file check\\\\n2026-01-20T11:05:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:22Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.985511 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:22Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.001779 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5489d4-628b-4079-b05c-06f8dcf74f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9068777e4f01e848a82bf288d7a3392cdc91101ee3a01bb4a81df93821dd63a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c38f828d227876595bd9deb8b2d85fb59815051b2e2bfa76a4708ffed580f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35e6d9a60b46ffa8c3718ab0311f05ffc829ac99c3a07206ee12402ebad872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:22Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.021696 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.037639 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.050238 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-20T11:06:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.062643 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cdc8181-1be1-4da5-9049-40528ec5f9b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8132f828bbc5741a0a318fe005b753dff87f06d263948c088483daca665ce86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\
\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6faa6708fa29d5eb071d0383feaa4d4ca6f04d52a3b5ad147a929daee25712aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6faa6708fa29d5eb071d0383feaa4d4ca6f04d52a3b5ad147a929daee25712aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.064315 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.064343 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.064363 4725 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.064380 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.064393 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:23Z","lastTransitionTime":"2026-01-20T11:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.076647 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.089490 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.100356 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.109982 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.123184 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.129183 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 09:32:02.686930509 +0000 UTC Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.139369 4725 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.150049 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:23 crc 
kubenswrapper[4725]: I0120 11:06:23.162995 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2
d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.166613 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.166659 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.166670 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.166685 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.166693 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:23Z","lastTransitionTime":"2026-01-20T11:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.187043 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.200384 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.220772 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:06:00Z\\\",\\\"message\\\":\\\"le to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: 
failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z]\\\\nI0120 11:05:59.983920 6721 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"61d39e4d-21a9-4387-9a2b-fa4ad14792e2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLB\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf
09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:23Z is after 2025-08-24T17:21:41Z"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.268803 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.268842 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.268856 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.268872 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.268883 4725 setters.go:603] "Node became not ready" node="crc"
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:23Z","lastTransitionTime":"2026-01-20T11:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.370929 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.370967 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.370984 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.371002 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.371013 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:23Z","lastTransitionTime":"2026-01-20T11:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.473841 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.474232 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.474500 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.474747 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.474961 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:23Z","lastTransitionTime":"2026-01-20T11:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.577644 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.577730 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.577741 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.577950 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.577970 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:23Z","lastTransitionTime":"2026-01-20T11:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.681264 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.681308 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.681321 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.681338 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.681348 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:23Z","lastTransitionTime":"2026-01-20T11:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.783993 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.784027 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.784035 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.784050 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.784059 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:23Z","lastTransitionTime":"2026-01-20T11:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.887037 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.887098 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.887109 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.887124 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.887135 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:23Z","lastTransitionTime":"2026-01-20T11:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.931527 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.931534 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.931620 4725 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 11:06:23 crc kubenswrapper[4725]: E0120 11:06:23.932009 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 20 11:06:23 crc kubenswrapper[4725]: E0120 11:06:23.932220 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 11:06:23 crc kubenswrapper[4725]: E0120 11:06:23.932265 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.932356 4725 scope.go:117] "RemoveContainer" containerID="62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241"
Jan 20 11:06:23 crc kubenswrapper[4725]: E0120 11:06:23.932741 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.989615 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.989665 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.989677 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.989694 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.989707 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:23Z","lastTransitionTime":"2026-01-20T11:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.092052 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.092107 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.092130 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.092144 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.092154 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:24Z","lastTransitionTime":"2026-01-20T11:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.129384 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 21:59:44.190824728 +0000 UTC
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.195122 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.195156 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.195166 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.195181 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.195194 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:24Z","lastTransitionTime":"2026-01-20T11:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.303327 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.303395 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.303409 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.303447 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.303468 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:24Z","lastTransitionTime":"2026-01-20T11:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.405602 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.405643 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.405655 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.405671 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.405682 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:24Z","lastTransitionTime":"2026-01-20T11:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.508416 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.508559 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.508571 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.508589 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.508603 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:24Z","lastTransitionTime":"2026-01-20T11:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.611564 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.611626 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.611638 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.611665 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.611677 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:24Z","lastTransitionTime":"2026-01-20T11:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.714161 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.714200 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.714219 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.714233 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.714244 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:24Z","lastTransitionTime":"2026-01-20T11:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.817333 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.817367 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.817376 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.817390 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.817399 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:24Z","lastTransitionTime":"2026-01-20T11:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.919265 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.919299 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.919308 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.919322 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.919332 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:24Z","lastTransitionTime":"2026-01-20T11:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.932394 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4"
Jan 20 11:06:24 crc kubenswrapper[4725]: E0120 11:06:24.932533 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66"
Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.021584 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.021627 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.021639 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.021654 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.021668 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:25Z","lastTransitionTime":"2026-01-20T11:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.125227 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.125265 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.125277 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.125293 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.125302 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:25Z","lastTransitionTime":"2026-01-20T11:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.130568 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 12:39:42.676371418 +0000 UTC
Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.227765 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.227832 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.227843 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.227863 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.227875 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:25Z","lastTransitionTime":"2026-01-20T11:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.330906 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.331240 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.331691 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.331802 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.331902 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:25Z","lastTransitionTime":"2026-01-20T11:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.435567 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.435626 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.435639 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.435656 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.435668 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:25Z","lastTransitionTime":"2026-01-20T11:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.539209 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.539501 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.539591 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.539698 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.539788 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:25Z","lastTransitionTime":"2026-01-20T11:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.642220 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.642270 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.642281 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.642298 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.642309 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:25Z","lastTransitionTime":"2026-01-20T11:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.744837 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.745304 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.745413 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.745517 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.745626 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:25Z","lastTransitionTime":"2026-01-20T11:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.848704 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.849002 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.849134 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.849224 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.849298 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:25Z","lastTransitionTime":"2026-01-20T11:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.932027 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:25 crc kubenswrapper[4725]: E0120 11:06:25.932251 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.932336 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:25 crc kubenswrapper[4725]: E0120 11:06:25.932547 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.932568 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:25 crc kubenswrapper[4725]: E0120 11:06:25.932994 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.952616 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.952696 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.952714 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.952740 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.952766 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:25Z","lastTransitionTime":"2026-01-20T11:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.055963 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.056016 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.056028 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.056044 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.056054 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:26Z","lastTransitionTime":"2026-01-20T11:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.131329 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 19:31:26.771030937 +0000 UTC Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.163920 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.163965 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.163979 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.163998 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.164011 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:26Z","lastTransitionTime":"2026-01-20T11:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.267296 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.267338 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.267348 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.267364 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.267376 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:26Z","lastTransitionTime":"2026-01-20T11:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.370803 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.370842 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.370852 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.370867 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.370878 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:26Z","lastTransitionTime":"2026-01-20T11:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.472979 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.473019 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.473029 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.473043 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.473053 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:26Z","lastTransitionTime":"2026-01-20T11:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.576254 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.576289 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.576299 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.576313 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.576323 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:26Z","lastTransitionTime":"2026-01-20T11:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.679130 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.679175 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.679187 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.679202 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.679216 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:26Z","lastTransitionTime":"2026-01-20T11:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.781592 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.781935 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.782039 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.782152 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.782226 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:26Z","lastTransitionTime":"2026-01-20T11:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.884800 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.884833 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.884845 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.884860 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.884871 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:26Z","lastTransitionTime":"2026-01-20T11:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.931861 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:26 crc kubenswrapper[4725]: E0120 11:06:26.932307 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.987331 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.987379 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.987390 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.987407 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.987418 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:26Z","lastTransitionTime":"2026-01-20T11:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.089791 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.089833 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.089846 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.089888 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.089903 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:27Z","lastTransitionTime":"2026-01-20T11:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.131976 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 22:47:18.757825445 +0000 UTC Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.191906 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.191947 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.191957 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.191977 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.191989 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:27Z","lastTransitionTime":"2026-01-20T11:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.294469 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.294541 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.294565 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.294600 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.294636 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:27Z","lastTransitionTime":"2026-01-20T11:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.397795 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.398026 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.398046 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.398070 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.398119 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:27Z","lastTransitionTime":"2026-01-20T11:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.500791 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.500852 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.500865 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.500889 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.500902 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:27Z","lastTransitionTime":"2026-01-20T11:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.604201 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.604247 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.604258 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.604293 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.604310 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:27Z","lastTransitionTime":"2026-01-20T11:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.706805 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.706852 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.706865 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.706882 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.706894 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:27Z","lastTransitionTime":"2026-01-20T11:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.809797 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.810108 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.810224 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.810331 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.810421 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:27Z","lastTransitionTime":"2026-01-20T11:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.913123 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.913182 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.913199 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.913223 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.913240 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:27Z","lastTransitionTime":"2026-01-20T11:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.931662 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.931755 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:27 crc kubenswrapper[4725]: E0120 11:06:27.931825 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.931864 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:27 crc kubenswrapper[4725]: E0120 11:06:27.931890 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:27 crc kubenswrapper[4725]: E0120 11:06:27.932033 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.015485 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.015533 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.015545 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.015561 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.015573 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:28Z","lastTransitionTime":"2026-01-20T11:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.119105 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.119167 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.119186 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.119206 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.119220 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:28Z","lastTransitionTime":"2026-01-20T11:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.132484 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 19:08:33.273115752 +0000 UTC Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.222456 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.222766 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.222896 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.223413 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.223544 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:28Z","lastTransitionTime":"2026-01-20T11:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.326518 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.326569 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.326593 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.326621 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.326643 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:28Z","lastTransitionTime":"2026-01-20T11:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.429758 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.429795 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.429805 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.429819 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.429829 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:28Z","lastTransitionTime":"2026-01-20T11:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.532854 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.532959 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.532986 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.533016 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.533038 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:28Z","lastTransitionTime":"2026-01-20T11:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.636122 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.636228 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.636304 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.636333 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.636397 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:28Z","lastTransitionTime":"2026-01-20T11:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.740439 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.740470 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.740480 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.740494 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.740502 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:28Z","lastTransitionTime":"2026-01-20T11:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.843372 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.843415 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.843425 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.843443 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.843454 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:28Z","lastTransitionTime":"2026-01-20T11:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.931653 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:28 crc kubenswrapper[4725]: E0120 11:06:28.931869 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.945605 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.945658 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.945671 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.945690 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.945703 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:28Z","lastTransitionTime":"2026-01-20T11:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.962636 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.962688 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.962709 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.962767 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.962780 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:28Z","lastTransitionTime":"2026-01-20T11:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.050397 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx"] Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.050919 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.056753 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.056764 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.058475 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.058603 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.091749 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=65.091727462 podStartE2EDuration="1m5.091727462s" podCreationTimestamp="2026-01-20 11:05:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:06:29.09166364 +0000 UTC m=+117.299985633" watchObservedRunningTime="2026-01-20 11:06:29.091727462 +0000 UTC m=+117.300049435" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.091956 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=96.091950899 podStartE2EDuration="1m36.091950899s" podCreationTimestamp="2026-01-20 11:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:06:29.076644888 +0000 UTC m=+117.284966861" watchObservedRunningTime="2026-01-20 
11:06:29.091950899 +0000 UTC m=+117.300272862" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.133292 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 03:35:52.987331142 +0000 UTC Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.133376 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.139345 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podStartSLOduration=95.139329574 podStartE2EDuration="1m35.139329574s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:06:29.139033464 +0000 UTC m=+117.347355437" watchObservedRunningTime="2026-01-20 11:06:29.139329574 +0000 UTC m=+117.347651547" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.145106 4725 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.152521 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35389854-308b-4f28-9ac3-a41e20853c06-service-ca\") pod \"cluster-version-operator-5c965bbfc6-95pnx\" (UID: \"35389854-308b-4f28-9ac3-a41e20853c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.152572 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35389854-308b-4f28-9ac3-a41e20853c06-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-95pnx\" 
(UID: \"35389854-308b-4f28-9ac3-a41e20853c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.152647 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/35389854-308b-4f28-9ac3-a41e20853c06-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-95pnx\" (UID: \"35389854-308b-4f28-9ac3-a41e20853c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.152696 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/35389854-308b-4f28-9ac3-a41e20853c06-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-95pnx\" (UID: \"35389854-308b-4f28-9ac3-a41e20853c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.152728 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35389854-308b-4f28-9ac3-a41e20853c06-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-95pnx\" (UID: \"35389854-308b-4f28-9ac3-a41e20853c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.174373 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-vchwb" podStartSLOduration=95.174348548 podStartE2EDuration="1m35.174348548s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:06:29.174116951 +0000 UTC m=+117.382438944" watchObservedRunningTime="2026-01-20 
11:06:29.174348548 +0000 UTC m=+117.382670521" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.174584 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-z7f69" podStartSLOduration=95.174578666 podStartE2EDuration="1m35.174578666s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:06:29.157307155 +0000 UTC m=+117.365629138" watchObservedRunningTime="2026-01-20 11:06:29.174578666 +0000 UTC m=+117.382900639" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.185956 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=30.185937544 podStartE2EDuration="30.185937544s" podCreationTimestamp="2026-01-20 11:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:06:29.185347586 +0000 UTC m=+117.393669559" watchObservedRunningTime="2026-01-20 11:06:29.185937544 +0000 UTC m=+117.394259517" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.253721 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35389854-308b-4f28-9ac3-a41e20853c06-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-95pnx\" (UID: \"35389854-308b-4f28-9ac3-a41e20853c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.253801 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/35389854-308b-4f28-9ac3-a41e20853c06-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-95pnx\" (UID: 
\"35389854-308b-4f28-9ac3-a41e20853c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.253849 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/35389854-308b-4f28-9ac3-a41e20853c06-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-95pnx\" (UID: \"35389854-308b-4f28-9ac3-a41e20853c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.253888 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35389854-308b-4f28-9ac3-a41e20853c06-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-95pnx\" (UID: \"35389854-308b-4f28-9ac3-a41e20853c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.253912 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35389854-308b-4f28-9ac3-a41e20853c06-service-ca\") pod \"cluster-version-operator-5c965bbfc6-95pnx\" (UID: \"35389854-308b-4f28-9ac3-a41e20853c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.253930 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/35389854-308b-4f28-9ac3-a41e20853c06-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-95pnx\" (UID: \"35389854-308b-4f28-9ac3-a41e20853c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.254024 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/35389854-308b-4f28-9ac3-a41e20853c06-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-95pnx\" (UID: \"35389854-308b-4f28-9ac3-a41e20853c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.254866 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35389854-308b-4f28-9ac3-a41e20853c06-service-ca\") pod \"cluster-version-operator-5c965bbfc6-95pnx\" (UID: \"35389854-308b-4f28-9ac3-a41e20853c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.263316 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-c9dck" podStartSLOduration=96.263296029 podStartE2EDuration="1m36.263296029s" podCreationTimestamp="2026-01-20 11:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:06:29.228012726 +0000 UTC m=+117.436334699" watchObservedRunningTime="2026-01-20 11:06:29.263296029 +0000 UTC m=+117.471618002" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.267574 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35389854-308b-4f28-9ac3-a41e20853c06-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-95pnx\" (UID: \"35389854-308b-4f28-9ac3-a41e20853c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.272618 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35389854-308b-4f28-9ac3-a41e20853c06-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-95pnx\" (UID: 
\"35389854-308b-4f28-9ac3-a41e20853c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.279611 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-fv2jh" podStartSLOduration=96.279590511 podStartE2EDuration="1m36.279590511s" podCreationTimestamp="2026-01-20 11:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:06:29.263530537 +0000 UTC m=+117.471852510" watchObservedRunningTime="2026-01-20 11:06:29.279590511 +0000 UTC m=+117.487912484" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.294163 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" podStartSLOduration=93.294145947 podStartE2EDuration="1m33.294145947s" podCreationTimestamp="2026-01-20 11:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:06:29.293438556 +0000 UTC m=+117.501760539" watchObservedRunningTime="2026-01-20 11:06:29.294145947 +0000 UTC m=+117.502467920" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.348183 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=96.348165836 podStartE2EDuration="1m36.348165836s" podCreationTimestamp="2026-01-20 11:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:06:29.347191476 +0000 UTC m=+117.555513449" watchObservedRunningTime="2026-01-20 11:06:29.348165836 +0000 UTC m=+117.556487809" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.366715 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.382862 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=96.382840881 podStartE2EDuration="1m36.382840881s" podCreationTimestamp="2026-01-20 11:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:06:29.382505531 +0000 UTC m=+117.590827524" watchObservedRunningTime="2026-01-20 11:06:29.382840881 +0000 UTC m=+117.591162854" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.931912 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.931928 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.932456 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:29 crc kubenswrapper[4725]: E0120 11:06:29.932577 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:29 crc kubenswrapper[4725]: E0120 11:06:29.932657 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:29 crc kubenswrapper[4725]: E0120 11:06:29.932727 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:30 crc kubenswrapper[4725]: I0120 11:06:30.167207 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" event={"ID":"35389854-308b-4f28-9ac3-a41e20853c06","Type":"ContainerStarted","Data":"09ae986a64fe961c1b762568a3457e61a43a64c207922c968b75267161d978da"} Jan 20 11:06:30 crc kubenswrapper[4725]: I0120 11:06:30.167267 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" event={"ID":"35389854-308b-4f28-9ac3-a41e20853c06","Type":"ContainerStarted","Data":"702b932997e2135b2cad23835aca9243e0293ab5e7aa7c6aaa4d5a7bdfcb0d15"} Jan 20 11:06:30 crc kubenswrapper[4725]: I0120 11:06:30.186741 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" 
podStartSLOduration=96.186718124 podStartE2EDuration="1m36.186718124s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:06:30.186684524 +0000 UTC m=+118.395006497" watchObservedRunningTime="2026-01-20 11:06:30.186718124 +0000 UTC m=+118.395040097" Jan 20 11:06:30 crc kubenswrapper[4725]: I0120 11:06:30.931644 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:30 crc kubenswrapper[4725]: E0120 11:06:30.932168 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:31 crc kubenswrapper[4725]: I0120 11:06:31.931317 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:31 crc kubenswrapper[4725]: I0120 11:06:31.931459 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:31 crc kubenswrapper[4725]: E0120 11:06:31.931552 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:31 crc kubenswrapper[4725]: I0120 11:06:31.931352 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:31 crc kubenswrapper[4725]: E0120 11:06:31.931693 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:31 crc kubenswrapper[4725]: E0120 11:06:31.931875 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:32 crc kubenswrapper[4725]: I0120 11:06:32.931593 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:32 crc kubenswrapper[4725]: E0120 11:06:32.932958 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:32 crc kubenswrapper[4725]: E0120 11:06:32.952213 4725 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 20 11:06:33 crc kubenswrapper[4725]: E0120 11:06:33.010850 4725 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 20 11:06:34 crc kubenswrapper[4725]: I0120 11:06:33.931786 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:34 crc kubenswrapper[4725]: I0120 11:06:33.931918 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:34 crc kubenswrapper[4725]: E0120 11:06:33.932024 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:34 crc kubenswrapper[4725]: I0120 11:06:33.931810 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:34 crc kubenswrapper[4725]: E0120 11:06:33.932108 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:34 crc kubenswrapper[4725]: E0120 11:06:33.932278 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:34 crc kubenswrapper[4725]: I0120 11:06:34.931668 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:34 crc kubenswrapper[4725]: E0120 11:06:34.931862 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:35 crc kubenswrapper[4725]: I0120 11:06:35.856144 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vchwb_627f7c97-4173-413f-a90e-e2c5e058c53b/kube-multus/1.log" Jan 20 11:06:35 crc kubenswrapper[4725]: I0120 11:06:35.856761 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vchwb_627f7c97-4173-413f-a90e-e2c5e058c53b/kube-multus/0.log" Jan 20 11:06:35 crc kubenswrapper[4725]: I0120 11:06:35.856806 4725 generic.go:334] "Generic (PLEG): container finished" podID="627f7c97-4173-413f-a90e-e2c5e058c53b" containerID="31582cdcadbdaf1ab01a8b97fcd1abb4352c22086d2562704ab13dd4f470cea6" exitCode=1 Jan 20 11:06:35 crc kubenswrapper[4725]: I0120 11:06:35.856838 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vchwb" event={"ID":"627f7c97-4173-413f-a90e-e2c5e058c53b","Type":"ContainerDied","Data":"31582cdcadbdaf1ab01a8b97fcd1abb4352c22086d2562704ab13dd4f470cea6"} Jan 20 11:06:35 crc kubenswrapper[4725]: I0120 11:06:35.856872 4725 scope.go:117] "RemoveContainer" containerID="60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad" Jan 20 11:06:35 crc kubenswrapper[4725]: I0120 11:06:35.857355 4725 scope.go:117] "RemoveContainer" containerID="31582cdcadbdaf1ab01a8b97fcd1abb4352c22086d2562704ab13dd4f470cea6" Jan 20 11:06:35 crc kubenswrapper[4725]: E0120 11:06:35.857547 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-vchwb_openshift-multus(627f7c97-4173-413f-a90e-e2c5e058c53b)\"" pod="openshift-multus/multus-vchwb" podUID="627f7c97-4173-413f-a90e-e2c5e058c53b" Jan 20 11:06:35 crc kubenswrapper[4725]: I0120 11:06:35.932030 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:35 crc kubenswrapper[4725]: I0120 11:06:35.932068 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:35 crc kubenswrapper[4725]: I0120 11:06:35.932180 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:35 crc kubenswrapper[4725]: E0120 11:06:35.932284 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:35 crc kubenswrapper[4725]: E0120 11:06:35.932392 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:35 crc kubenswrapper[4725]: E0120 11:06:35.932465 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:36 crc kubenswrapper[4725]: I0120 11:06:36.865025 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vchwb_627f7c97-4173-413f-a90e-e2c5e058c53b/kube-multus/1.log" Jan 20 11:06:36 crc kubenswrapper[4725]: I0120 11:06:36.931715 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:36 crc kubenswrapper[4725]: E0120 11:06:36.931867 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:36 crc kubenswrapper[4725]: I0120 11:06:36.933038 4725 scope.go:117] "RemoveContainer" containerID="62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241" Jan 20 11:06:36 crc kubenswrapper[4725]: E0120 11:06:36.933359 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" Jan 20 11:06:37 crc kubenswrapper[4725]: I0120 11:06:37.931372 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:37 crc kubenswrapper[4725]: I0120 11:06:37.931460 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:37 crc kubenswrapper[4725]: E0120 11:06:37.931508 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:37 crc kubenswrapper[4725]: E0120 11:06:37.931604 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:37 crc kubenswrapper[4725]: I0120 11:06:37.931396 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:37 crc kubenswrapper[4725]: E0120 11:06:37.931681 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:38 crc kubenswrapper[4725]: E0120 11:06:38.011916 4725 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 20 11:06:38 crc kubenswrapper[4725]: I0120 11:06:38.931740 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:38 crc kubenswrapper[4725]: E0120 11:06:38.931898 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:39 crc kubenswrapper[4725]: I0120 11:06:39.931640 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:39 crc kubenswrapper[4725]: I0120 11:06:39.931696 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:39 crc kubenswrapper[4725]: E0120 11:06:39.931791 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:39 crc kubenswrapper[4725]: I0120 11:06:39.931696 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:39 crc kubenswrapper[4725]: E0120 11:06:39.931876 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:39 crc kubenswrapper[4725]: E0120 11:06:39.931961 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:40 crc kubenswrapper[4725]: I0120 11:06:40.931316 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:40 crc kubenswrapper[4725]: E0120 11:06:40.931460 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:41 crc kubenswrapper[4725]: I0120 11:06:41.932130 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:41 crc kubenswrapper[4725]: I0120 11:06:41.932157 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:41 crc kubenswrapper[4725]: I0120 11:06:41.932121 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:41 crc kubenswrapper[4725]: E0120 11:06:41.932360 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:41 crc kubenswrapper[4725]: E0120 11:06:41.932514 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:41 crc kubenswrapper[4725]: E0120 11:06:41.932619 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:42 crc kubenswrapper[4725]: I0120 11:06:42.932605 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:42 crc kubenswrapper[4725]: E0120 11:06:42.935144 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:43 crc kubenswrapper[4725]: E0120 11:06:43.013208 4725 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 20 11:06:43 crc kubenswrapper[4725]: I0120 11:06:43.931470 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:43 crc kubenswrapper[4725]: I0120 11:06:43.931495 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:43 crc kubenswrapper[4725]: E0120 11:06:43.931663 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:43 crc kubenswrapper[4725]: E0120 11:06:43.931773 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:43 crc kubenswrapper[4725]: I0120 11:06:43.931516 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:43 crc kubenswrapper[4725]: E0120 11:06:43.931874 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:44 crc kubenswrapper[4725]: I0120 11:06:44.931354 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:44 crc kubenswrapper[4725]: E0120 11:06:44.931496 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:45 crc kubenswrapper[4725]: I0120 11:06:45.931906 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:45 crc kubenswrapper[4725]: I0120 11:06:45.931963 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:45 crc kubenswrapper[4725]: I0120 11:06:45.931898 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:45 crc kubenswrapper[4725]: E0120 11:06:45.932205 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:45 crc kubenswrapper[4725]: E0120 11:06:45.932379 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:45 crc kubenswrapper[4725]: E0120 11:06:45.932597 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:46 crc kubenswrapper[4725]: I0120 11:06:46.931680 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:46 crc kubenswrapper[4725]: E0120 11:06:46.931970 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:47 crc kubenswrapper[4725]: I0120 11:06:47.931764 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:47 crc kubenswrapper[4725]: E0120 11:06:47.931985 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:47 crc kubenswrapper[4725]: I0120 11:06:47.931798 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:47 crc kubenswrapper[4725]: E0120 11:06:47.932160 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:47 crc kubenswrapper[4725]: I0120 11:06:47.931772 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:47 crc kubenswrapper[4725]: E0120 11:06:47.932255 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:48 crc kubenswrapper[4725]: E0120 11:06:48.014412 4725 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 20 11:06:48 crc kubenswrapper[4725]: I0120 11:06:48.931627 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:48 crc kubenswrapper[4725]: E0120 11:06:48.932124 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:49 crc kubenswrapper[4725]: I0120 11:06:49.931832 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:49 crc kubenswrapper[4725]: I0120 11:06:49.931928 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:49 crc kubenswrapper[4725]: E0120 11:06:49.932183 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:49 crc kubenswrapper[4725]: I0120 11:06:49.932219 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:49 crc kubenswrapper[4725]: E0120 11:06:49.932310 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:49 crc kubenswrapper[4725]: E0120 11:06:49.932435 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:49 crc kubenswrapper[4725]: I0120 11:06:49.932738 4725 scope.go:117] "RemoveContainer" containerID="31582cdcadbdaf1ab01a8b97fcd1abb4352c22086d2562704ab13dd4f470cea6" Jan 20 11:06:50 crc kubenswrapper[4725]: I0120 11:06:50.920454 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vchwb_627f7c97-4173-413f-a90e-e2c5e058c53b/kube-multus/1.log" Jan 20 11:06:50 crc kubenswrapper[4725]: I0120 11:06:50.920575 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vchwb" event={"ID":"627f7c97-4173-413f-a90e-e2c5e058c53b","Type":"ContainerStarted","Data":"02f0aeb59f3b42d33162846abb8c9018dd20f1ace3b32284fe79541bb421e0f5"} Jan 20 11:06:50 crc kubenswrapper[4725]: I0120 11:06:50.933718 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:50 crc kubenswrapper[4725]: E0120 11:06:50.934012 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:50 crc kubenswrapper[4725]: I0120 11:06:50.936174 4725 scope.go:117] "RemoveContainer" containerID="62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241" Jan 20 11:06:51 crc kubenswrapper[4725]: I0120 11:06:51.926232 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovnkube-controller/3.log" Jan 20 11:06:51 crc kubenswrapper[4725]: I0120 11:06:51.929461 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerStarted","Data":"3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2"} Jan 20 11:06:51 crc kubenswrapper[4725]: I0120 11:06:51.930036 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:06:51 crc kubenswrapper[4725]: I0120 11:06:51.931221 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:51 crc kubenswrapper[4725]: I0120 11:06:51.931228 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:51 crc kubenswrapper[4725]: E0120 11:06:51.931332 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:51 crc kubenswrapper[4725]: I0120 11:06:51.931233 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:51 crc kubenswrapper[4725]: E0120 11:06:51.931427 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:51 crc kubenswrapper[4725]: E0120 11:06:51.931613 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:52 crc kubenswrapper[4725]: I0120 11:06:52.007999 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podStartSLOduration=118.007979551 podStartE2EDuration="1m58.007979551s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:06:52.007887508 +0000 UTC m=+140.216209581" watchObservedRunningTime="2026-01-20 11:06:52.007979551 +0000 UTC m=+140.216301544" Jan 20 11:06:52 crc kubenswrapper[4725]: I0120 11:06:52.009162 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5lfc4"] Jan 20 11:06:52 crc kubenswrapper[4725]: I0120 11:06:52.009283 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:52 crc kubenswrapper[4725]: E0120 11:06:52.009394 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:53 crc kubenswrapper[4725]: E0120 11:06:53.015767 4725 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 20 11:06:53 crc kubenswrapper[4725]: I0120 11:06:53.931828 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:53 crc kubenswrapper[4725]: I0120 11:06:53.931890 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:53 crc kubenswrapper[4725]: I0120 11:06:53.931895 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:53 crc kubenswrapper[4725]: I0120 11:06:53.931895 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:53 crc kubenswrapper[4725]: E0120 11:06:53.932788 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:53 crc kubenswrapper[4725]: E0120 11:06:53.932833 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:53 crc kubenswrapper[4725]: E0120 11:06:53.932852 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:53 crc kubenswrapper[4725]: E0120 11:06:53.932861 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:55 crc kubenswrapper[4725]: I0120 11:06:55.931615 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:55 crc kubenswrapper[4725]: I0120 11:06:55.931685 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:55 crc kubenswrapper[4725]: I0120 11:06:55.931717 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:55 crc kubenswrapper[4725]: I0120 11:06:55.931779 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:55 crc kubenswrapper[4725]: E0120 11:06:55.932870 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:55 crc kubenswrapper[4725]: E0120 11:06:55.932968 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:55 crc kubenswrapper[4725]: E0120 11:06:55.933057 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:55 crc kubenswrapper[4725]: E0120 11:06:55.933151 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:56 crc kubenswrapper[4725]: I0120 11:06:56.727515 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:06:56 crc kubenswrapper[4725]: I0120 11:06:56.727664 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:06:57 crc kubenswrapper[4725]: I0120 11:06:57.931406 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:57 crc kubenswrapper[4725]: I0120 11:06:57.931496 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:57 crc kubenswrapper[4725]: I0120 11:06:57.931535 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:57 crc kubenswrapper[4725]: I0120 11:06:57.931424 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:57 crc kubenswrapper[4725]: E0120 11:06:57.931638 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:57 crc kubenswrapper[4725]: E0120 11:06:57.931669 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:57 crc kubenswrapper[4725]: E0120 11:06:57.931795 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:57 crc kubenswrapper[4725]: E0120 11:06:57.931894 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.473367 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.527411 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-twkw7"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.528149 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.530311 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hhz9f"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.530957 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f" Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.531255 4725 reflector.go:561] object-"openshift-apiserver"/"etcd-client": failed to list *v1.Secret: secrets "etcd-client" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.531317 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"etcd-client\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.532680 4725 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-r5qmp"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.533297 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.534265 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.534667 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.535628 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5fgr9"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.536207 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.536995 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lhx4z"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.537658 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.538608 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.539116 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5" Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.541487 4725 reflector.go:561] object-"openshift-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.541533 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.541543 4725 reflector.go:561] object-"openshift-apiserver"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.541602 4725 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-images": failed to list *v1.ConfigMap: configmaps "machine-api-operator-images" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.541618 4725 reflector.go:561] object-"openshift-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: configmaps "etcd-serving-ca" is forbidden: User "system:node:crc" cannot list 
resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.541627 4725 reflector.go:561] object-"openshift-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.541615 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.541647 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"etcd-serving-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.541649 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-api-operator-images\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.541667 
4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.541631 4725 reflector.go:561] object-"openshift-apiserver"/"audit-1": failed to list *v1.ConfigMap: configmaps "audit-1" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.541686 4725 reflector.go:561] object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff": failed to list *v1.Secret: secrets "openshift-apiserver-sa-dockercfg-djjff" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.541707 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"audit-1\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.541718 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-djjff\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-sa-dockercfg-djjff\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" 
in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.541543 4725 reflector.go:561] object-"openshift-apiserver"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.541757 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.541783 4725 reflector.go:561] object-"openshift-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.541813 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.542007 4725 reflector.go:561] object-"openshift-apiserver"/"image-import-ca": failed to list *v1.ConfigMap: configmaps 
"image-import-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.542034 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"image-import-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"image-import-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.543217 4725 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7": failed to list *v1.Secret: secrets "machine-api-operator-dockercfg-mfbb7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.543254 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-mfbb7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-dockercfg-mfbb7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.543885 4725 reflector.go:561] object-"openshift-apiserver"/"encryption-config-1": failed to list *v1.Secret: secrets "encryption-config-1" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc 
kubenswrapper[4725]: E0120 11:06:59.543914 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"encryption-config-1\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.543992 4725 reflector.go:561] object-"openshift-machine-api"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.544014 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.544990 4725 reflector.go:561] object-"openshift-machine-api"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.545033 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User 
\"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.547250 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-75nfb"]
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.547899 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-75nfb"
Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.549109 4725 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-tls": failed to list *v1.Secret: secrets "machine-api-operator-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object
Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.549159 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.551755 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk"]
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.552646 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.553735 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw"]
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.554357 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.554994 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g"]
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.555790 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.556732 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw"]
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.557175 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.560135 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj"]
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.560706 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6s2qz"]
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.560803 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.561396 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-2hmdd"]
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.561751 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-2hmdd"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.562030 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6s2qz"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.562529 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-g28q4"]
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.563114 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-g28q4"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.564812 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-vc6c2"]
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.565597 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5"]
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.565711 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-vc6c2"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.566363 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.570733 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5fj5p"]
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.571728 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p"
Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.572704 4725 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj": failed to list *v1.Secret: secrets "authentication-operator-dockercfg-mz9bj" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object
Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.572758 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-mz9bj\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"authentication-operator-dockercfg-mz9bj\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.572773 4725 reflector.go:561] object-"openshift-authentication-operator"/"service-ca-bundle": failed to list *v1.ConfigMap: configmaps "service-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object
Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.572808 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"service-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.572708 4725 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object
Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.572831 4725 reflector.go:561] object-"openshift-authentication-operator"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object
Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.572888 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.572841 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.573132 4725 reflector.go:561] object-"openshift-authentication-operator"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object
Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.573140 4725 reflector.go:561] object-"openshift-route-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object
Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.573163 4725 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object
Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.573164 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.573164 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.573185 4725 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: secrets "openshift-controller-manager-sa-dockercfg-msq4c" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object
Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.573216 4725 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-config": failed to list *v1.ConfigMap: configmaps "authentication-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object
Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.573234 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-controller-manager-sa-dockercfg-msq4c\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.573192 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.573253 4725 reflector.go:561] object-"openshift-authentication-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object
Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.573282 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.573251 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"authentication-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.573357 4725 reflector.go:561] object-"openshift-authentication-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object
Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.573380 4725 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template": failed to list *v1.Secret: secrets "v4-0-config-system-ocp-branding-template" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object
Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.573386 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.573405 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-system-ocp-branding-template\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.573517 4725 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object
Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.573538 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.573615 4725 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-login": failed to list *v1.Secret: secrets "v4-0-config-user-template-login" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object
Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.573630 4725 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-provider-selection": failed to list *v1.Secret: secrets "v4-0-config-user-template-provider-selection" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object
Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.573634 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-user-template-login\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.573646 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-user-template-provider-selection\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.573734 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.573758 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.574706 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.575014 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.575255 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.575718 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-nxchh"]
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.578211 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.580671 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.580674 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.580836 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.581028 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.581266 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.581410 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.581617 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.581825 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.581882 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.581836 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.582116 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.582538 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.593895 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.596832 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.596871 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw"]
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.597344 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w5jhq"]
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.597792 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.598347 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599105 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-nxchh"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599134 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599217 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599300 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599349 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599464 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599510 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599463 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599594 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599628 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599643 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599760 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599796 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599878 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599873 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599937 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599976 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599996 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599882 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599944 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600068 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600094 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600134 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599904 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600210 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600225 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600244 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600327 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600405 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600415 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600501 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600505 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600574 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600588 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600651 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600681 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600731 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600743 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600765 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600830 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600838 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.601009 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.601032 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.601140 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.601689 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.601808 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn"]
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.601827 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.602612 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.603015 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv"]
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.603282 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.603576 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.603858 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.604122 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw"]
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.604443 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.604462 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.605910 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.609563 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k"]
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.610363 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.610979 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tgvmj"]
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.611644 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.613922 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.616296 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9"]
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.616566 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.616714 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv"]
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.617338 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.617964 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.618916 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.622039 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rlw62"]
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.623127 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8"]
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.623229 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlw62"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.628951 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28"]
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.629649 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl"]
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.630460 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq"]
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.634309 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.634976 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.635415 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.638151 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z"]
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.640037 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.641553 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.644334 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl"
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.644764 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sh5db"]
Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.645945 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sh5db" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.664043 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.665472 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2216efbd-f6b4-4579-a94a-18c5177df641-encryption-config\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.665524 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65hm8\" (UniqueName: \"kubernetes.io/projected/2216efbd-f6b4-4579-a94a-18c5177df641-kube-api-access-65hm8\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.665570 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2216efbd-f6b4-4579-a94a-18c5177df641-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.665648 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2216efbd-f6b4-4579-a94a-18c5177df641-etcd-client\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 
crc kubenswrapper[4725]: I0120 11:06:59.665692 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2216efbd-f6b4-4579-a94a-18c5177df641-audit-dir\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.665756 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2216efbd-f6b4-4579-a94a-18c5177df641-audit-policies\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.665780 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2216efbd-f6b4-4579-a94a-18c5177df641-serving-cert\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.665801 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2216efbd-f6b4-4579-a94a-18c5177df641-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.666012 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-psvt7"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.666947 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-psvt7" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.668484 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-7j2sn"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.669556 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hhz9f"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.669671 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-7j2sn" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.671420 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.674323 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.675036 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9vt8w"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.676148 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.676260 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.676772 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.686709 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-twkw7"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.688271 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r5qmp"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.691573 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.693773 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5fgr9"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.703234 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.705397 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2hmdd"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.705453 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.706717 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.711952 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 20 11:06:59 crc 
kubenswrapper[4725]: I0120 11:06:59.715236 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rlw62"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.715289 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.722364 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.723404 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lhx4z"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.724417 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6s2qz"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.725398 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-g28q4"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.726451 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-x85nm"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.727179 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-x85nm" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.727641 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.729025 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.729752 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.731736 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.732758 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.733960 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w5jhq"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.735150 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-75nfb"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.736435 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.737491 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.738335 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 20 
11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.738518 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tgvmj"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.739776 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.741276 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5fj5p"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.742262 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.742991 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.743914 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-vc6c2"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.748686 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.749934 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-7j2sn"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.752583 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9vt8w"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.753741 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.753799 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-dns/dns-default-x85nm"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.808933 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.809632 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.810284 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.812251 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sh5db"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.814794 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2216efbd-f6b4-4579-a94a-18c5177df641-audit-policies\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.814838 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2216efbd-f6b4-4579-a94a-18c5177df641-serving-cert\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.814857 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2216efbd-f6b4-4579-a94a-18c5177df641-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.814942 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2216efbd-f6b4-4579-a94a-18c5177df641-encryption-config\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.814968 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65hm8\" (UniqueName: \"kubernetes.io/projected/2216efbd-f6b4-4579-a94a-18c5177df641-kube-api-access-65hm8\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.814993 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2216efbd-f6b4-4579-a94a-18c5177df641-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.815147 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2216efbd-f6b4-4579-a94a-18c5177df641-audit-dir\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.815170 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2216efbd-f6b4-4579-a94a-18c5177df641-etcd-client\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: 
\"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.815888 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2216efbd-f6b4-4579-a94a-18c5177df641-audit-policies\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.815971 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-psvt7"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.816607 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2216efbd-f6b4-4579-a94a-18c5177df641-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.817211 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2216efbd-f6b4-4579-a94a-18c5177df641-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.817239 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2216efbd-f6b4-4579-a94a-18c5177df641-audit-dir\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.819339 4725 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-machine-config-operator/machine-config-server-kkxct"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.820704 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kkxct" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.821332 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2216efbd-f6b4-4579-a94a-18c5177df641-encryption-config\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.821993 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2216efbd-f6b4-4579-a94a-18c5177df641-serving-cert\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.822136 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-4s7gv"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.824156 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2216efbd-f6b4-4579-a94a-18c5177df641-etcd-client\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.824435 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4s7gv" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.825294 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4s7gv"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.832817 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.851004 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.870575 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.891182 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.910548 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.931219 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.931263 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.931316 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.931227 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.932144 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.951158 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.970661 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.990172 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.036818 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.050565 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.070024 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.090778 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.111049 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.132154 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.150840 4725 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.171336 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.190843 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.211489 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.230658 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.251451 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.270402 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.291855 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.310503 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.331399 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.350565 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 20 11:07:00 crc kubenswrapper[4725]: 
I0120 11:07:00.370997 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.390689 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.411572 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.431508 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.451517 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.470345 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.491127 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.510903 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.531301 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.550788 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.571257 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.590861 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.611262 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.628873 4725 request.go:700] Waited for 1.016918424s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0
Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.631179 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.652290 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.672027 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.701492 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.710449 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.731206 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.749922 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.770864 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.790581 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.831817 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.850701 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.870719 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.890606 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.911415 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.930763 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.950810 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.971268 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.990592 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.010205 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.031241 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.051870 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.071244 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.090451 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.110661 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.131583 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.151133 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.172429 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.192374 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.260924 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.262855 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.263095 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.269977 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.290981 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.311435 4725 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.330855 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.351147 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.370938 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.391051 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.410185 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.446443 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65hm8\" (UniqueName: \"kubernetes.io/projected/2216efbd-f6b4-4579-a94a-18c5177df641-kube-api-access-65hm8\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.451886 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.471262 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.490730 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.510865 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.530284 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.537318 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.552107 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.571502 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.590972 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.611316 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.628892 4725 request.go:700] Waited for 1.697253475s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.631332 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.653014 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.671342 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.691375 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.730266 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.751006 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.770976 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.790113 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.799787 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g"]
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.810668 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.854766 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.855051 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.871680 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.890375 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.911341 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.930548 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.956691 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.967594 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:01 crc kubenswrapper[4725]: E0120 11:07:01.967862 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:09:03.967825847 +0000 UTC m=+272.176147860 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.967995 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" event={"ID":"2216efbd-f6b4-4579-a94a-18c5177df641","Type":"ContainerStarted","Data":"1f112b92d92e3d2506761e631a32c75251786020111776fd88a51ae894fe2f06"}
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.970359 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.008598 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.010020 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.030975 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.070753 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.091245 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.110614 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.131690 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.151069 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.170783 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.191580 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.211782 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.230794 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.256874 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.283962 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.350188 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cec62c65-a846-4cc0-bb51-01d2d70c4c85-ca-trust-extracted\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.350274 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cec62c65-a846-4cc0-bb51-01d2d70c4c85-registry-certificates\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.350313 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.350323 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cec62c65-a846-4cc0-bb51-01d2d70c4c85-trusted-ca\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.350488 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nmbb\" (UniqueName: \"kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-kube-api-access-5nmbb\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.350586 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.350660 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-bound-sa-token\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.350708 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-registry-tls\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.350790 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cec62c65-a846-4cc0-bb51-01d2d70c4c85-installation-pull-secrets\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.351214 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.351371 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.351532 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 11:07:02 crc kubenswrapper[4725]: E0120 11:07:02.352227 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:02.852206207 +0000 UTC m=+151.060528200 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.352560 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.353116 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.354658 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.357335 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.357521 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.358093 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.452876 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.453174 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-config-volume\") pod \"collect-profiles-29481780-smks9\" (UID: \"e2d56c6e-b9ad-4de9-8fe6-06b00293050e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.453216 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wdq5\" (UniqueName: \"kubernetes.io/projected/7f131da2-d815-48eb-b2ab-7f6df6a4039a-kube-api-access-6wdq5\") pod \"catalog-operator-68c6474976-hqvrw\" (UID: \"7f131da2-d815-48eb-b2ab-7f6df6a4039a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw"
Jan 20 11:07:02 crc kubenswrapper[4725]: E0120 11:07:02.453305 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:02.953208322 +0000 UTC m=+151.161530295 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.453401 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1-auth-proxy-config\") pod \"machine-approver-56656f9798-5d4sw\" (UID: \"1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.453462 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.453484 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cb0c9cf6-4966-4bd0-8933-823bc00e103c-audit\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.453531 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6275b0b4-f9e0-4ecc-8a7a-3d702ec753eb-metrics-tls\") pod \"dns-operator-744455d44c-g28q4\" (UID: \"6275b0b4-f9e0-4ecc-8a7a-3d702ec753eb\") " pod="openshift-dns-operator/dns-operator-744455d44c-g28q4"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.455722 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5m4f\" (UniqueName: \"kubernetes.io/projected/08bc2ba3-3f1f-40df-bf3d-1d5ed634945b-kube-api-access-w5m4f\") pod \"console-operator-58897d9998-vc6c2\" (UID: \"08bc2ba3-3f1f-40df-bf3d-1d5ed634945b\") " pod="openshift-console-operator/console-operator-58897d9998-vc6c2"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456027 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpntm\" (UniqueName: \"kubernetes.io/projected/fb5baaeb-c041-44ff-9cb4-e2962a0d1b5a-kube-api-access-tpntm\") pod \"cluster-samples-operator-665b6dd947-6s2qz\" (UID: \"fb5baaeb-c041-44ff-9cb4-e2962a0d1b5a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6s2qz"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456238 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b8859d17-62ea-47b3-ac63-537e69ec9f90-service-ca\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456323 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2efafa7a-ca64-4166-a72b-9b70b86953ad-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mljkv\" (UID: \"2efafa7a-ca64-4166-a72b-9b70b86953ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456363 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dqmj\" (UniqueName: \"kubernetes.io/projected/2efafa7a-ca64-4166-a72b-9b70b86953ad-kube-api-access-6dqmj\") pod \"kube-storage-version-migrator-operator-b67b599dd-mljkv\" (UID: \"2efafa7a-ca64-4166-a72b-9b70b86953ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456387 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2efafa7a-ca64-4166-a72b-9b70b86953ad-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mljkv\" (UID: \"2efafa7a-ca64-4166-a72b-9b70b86953ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456424 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb0c9cf6-4966-4bd0-8933-823bc00e103c-serving-cert\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456493 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cec62c65-a846-4cc0-bb51-01d2d70c4c85-installation-pull-secrets\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456595 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac3b56d0-256f-40f8-b2ff-2271f82ff750-serving-cert\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456615 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f27b4eea-081e-421a-83e9-8a5266163c53-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5fgr9\" (UID: \"f27b4eea-081e-421a-83e9-8a5266163c53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456654 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456671 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpkjz\" (UniqueName: \"kubernetes.io/projected/e3e30f02-3956-427a-a1f3-6e1d51f242d6-kube-api-access-rpkjz\") pod \"openshift-apiserver-operator-796bbdcf4f-xtwx5\" (UID: \"e3e30f02-3956-427a-a1f3-6e1d51f242d6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456688 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d19058e6-30ec-474e-bada-73b4981a9b65-metrics-certs\") pod \"router-default-5444994796-nxchh\" (UID: \"d19058e6-30ec-474e-bada-73b4981a9b65\") " pod="openshift-ingress/router-default-5444994796-nxchh"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456710 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3dff36b-2e27-4c6b-bee4-19cd58833ea7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gs8nk\" (UID: \"c3dff36b-2e27-4c6b-bee4-19cd58833ea7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456726 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3dff36b-2e27-4c6b-bee4-19cd58833ea7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gs8nk\" (UID: \"c3dff36b-2e27-4c6b-bee4-19cd58833ea7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456745 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1-config\") pod \"machine-approver-56656f9798-5d4sw\" (UID: \"1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456861 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/293d5f2d-38b8-49ad-b7cc-eaf6ea931e59-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pnkqn\" (UID: \"293d5f2d-38b8-49ad-b7cc-eaf6ea931e59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456885 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4df8c05f-b523-439b-908b-c4f34b22b7e9-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lrwnv\" (UID: \"4df8c05f-b523-439b-908b-c4f34b22b7e9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456908 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.457007 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwl4m\" (UniqueName:
\"kubernetes.io/projected/808fb947-228d-42c4-ba11-480348f80d8a-kube-api-access-lwl4m\") pod \"machine-api-operator-5694c8668f-hhz9f\" (UID: \"808fb947-228d-42c4-ba11-480348f80d8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.457023 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wmnh\" (UniqueName: \"kubernetes.io/projected/ac3b56d0-256f-40f8-b2ff-2271f82ff750-kube-api-access-2wmnh\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.457108 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw4rl\" (UniqueName: \"kubernetes.io/projected/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-kube-api-access-dw4rl\") pod \"collect-profiles-29481780-smks9\" (UID: \"e2d56c6e-b9ad-4de9-8fe6-06b00293050e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.457127 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08bc2ba3-3f1f-40df-bf3d-1d5ed634945b-config\") pod \"console-operator-58897d9998-vc6c2\" (UID: \"08bc2ba3-3f1f-40df-bf3d-1d5ed634945b\") " pod="openshift-console-operator/console-operator-58897d9998-vc6c2" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.457530 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1f8986ee-ae07-4ffe-89f2-c73eca4d3465-available-featuregates\") pod \"openshift-config-operator-7777fb866f-cmnx5\" (UID: \"1f8986ee-ae07-4ffe-89f2-c73eca4d3465\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.457599 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb0c9cf6-4966-4bd0-8933-823bc00e103c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.457653 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f27b4eea-081e-421a-83e9-8a5266163c53-config\") pod \"authentication-operator-69f744f599-5fgr9\" (UID: \"f27b4eea-081e-421a-83e9-8a5266163c53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.457698 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cb0c9cf6-4966-4bd0-8933-823bc00e103c-node-pullsecrets\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.457744 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cec62c65-a846-4cc0-bb51-01d2d70c4c85-registry-certificates\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.457861 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbd48\" (UniqueName: 
\"kubernetes.io/projected/cf2d94b1-aa78-4a9d-8e32-232f92ec8988-kube-api-access-qbd48\") pod \"migrator-59844c95c7-rlw62\" (UID: \"cf2d94b1-aa78-4a9d-8e32-232f92ec8988\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlw62" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.457939 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gzxg\" (UniqueName: \"kubernetes.io/projected/cb0c9cf6-4966-4bd0-8933-823bc00e103c-kube-api-access-2gzxg\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458033 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9d0ff97b-8da9-4156-a78b-9ebd6886313f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2dsbj\" (UID: \"9d0ff97b-8da9-4156-a78b-9ebd6886313f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458061 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8859d17-62ea-47b3-ac63-537e69ec9f90-trusted-ca-bundle\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458108 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cec62c65-a846-4cc0-bb51-01d2d70c4c85-trusted-ca\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458136 4725 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458157 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f27b4eea-081e-421a-83e9-8a5266163c53-serving-cert\") pod \"authentication-operator-69f744f599-5fgr9\" (UID: \"f27b4eea-081e-421a-83e9-8a5266163c53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458181 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nmbb\" (UniqueName: \"kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-kube-api-access-5nmbb\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458204 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458244 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458440 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdplr\" (UniqueName: \"kubernetes.io/projected/fb63abc7-f429-46c5-aa23-259063c394d0-kube-api-access-qdplr\") pod \"cluster-image-registry-operator-dc59b4c8b-mscpw\" (UID: \"fb63abc7-f429-46c5-aa23-259063c394d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458567 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/293d5f2d-38b8-49ad-b7cc-eaf6ea931e59-config\") pod \"kube-apiserver-operator-766d6c64bb-pnkqn\" (UID: \"293d5f2d-38b8-49ad-b7cc-eaf6ea931e59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458666 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/600286e6-beb3-40f1-9077-9c8abf34d55a-config\") pod \"route-controller-manager-6576b87f9c-lwhzw\" (UID: \"600286e6-beb3-40f1-9077-9c8abf34d55a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458700 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac3b56d0-256f-40f8-b2ff-2271f82ff750-config\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458744 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d2rw\" (UniqueName: \"kubernetes.io/projected/9a6106c0-75fa-4285-bc23-06ced58cf133-kube-api-access-8d2rw\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458778 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb63abc7-f429-46c5-aa23-259063c394d0-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mscpw\" (UID: \"fb63abc7-f429-46c5-aa23-259063c394d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458798 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb0c9cf6-4966-4bd0-8933-823bc00e103c-config\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458819 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d19058e6-30ec-474e-bada-73b4981a9b65-default-certificate\") pod \"router-default-5444994796-nxchh\" (UID: \"d19058e6-30ec-474e-bada-73b4981a9b65\") " pod="openshift-ingress/router-default-5444994796-nxchh" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458842 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/ac3b56d0-256f-40f8-b2ff-2271f82ff750-etcd-client\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458862 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75fg8\" (UniqueName: \"kubernetes.io/projected/d19058e6-30ec-474e-bada-73b4981a9b65-kube-api-access-75fg8\") pod \"router-default-5444994796-nxchh\" (UID: \"d19058e6-30ec-474e-bada-73b4981a9b65\") " pod="openshift-ingress/router-default-5444994796-nxchh" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458881 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d19058e6-30ec-474e-bada-73b4981a9b65-stats-auth\") pod \"router-default-5444994796-nxchh\" (UID: \"d19058e6-30ec-474e-bada-73b4981a9b65\") " pod="openshift-ingress/router-default-5444994796-nxchh" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458918 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b8859d17-62ea-47b3-ac63-537e69ec9f90-console-oauth-config\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458939 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/08bc2ba3-3f1f-40df-bf3d-1d5ed634945b-trusted-ca\") pod \"console-operator-58897d9998-vc6c2\" (UID: \"08bc2ba3-3f1f-40df-bf3d-1d5ed634945b\") " pod="openshift-console-operator/console-operator-58897d9998-vc6c2" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458961 4725 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-serving-cert\") pod \"controller-manager-879f6c89f-r5qmp\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458984 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58svc\" (UniqueName: \"kubernetes.io/projected/a8d4d608-4f73-4365-a535-71e712884eb9-kube-api-access-58svc\") pod \"machine-config-operator-74547568cd-9lr6k\" (UID: \"a8d4d608-4f73-4365-a535-71e712884eb9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.459008 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.459027 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b8859d17-62ea-47b3-ac63-537e69ec9f90-oauth-serving-cert\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.459053 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3e30f02-3956-427a-a1f3-6e1d51f242d6-config\") pod 
\"openshift-apiserver-operator-796bbdcf4f-xtwx5\" (UID: \"e3e30f02-3956-427a-a1f3-6e1d51f242d6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.459136 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.459173 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8859d17-62ea-47b3-ac63-537e69ec9f90-console-serving-cert\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.459200 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a8d4d608-4f73-4365-a535-71e712884eb9-proxy-tls\") pod \"machine-config-operator-74547568cd-9lr6k\" (UID: \"a8d4d608-4f73-4365-a535-71e712884eb9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.459225 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cb0c9cf6-4966-4bd0-8933-823bc00e103c-etcd-serving-ca\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.459277 4725 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cec62c65-a846-4cc0-bb51-01d2d70c4c85-registry-certificates\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.459292 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/808fb947-228d-42c4-ba11-480348f80d8a-config\") pod \"machine-api-operator-5694c8668f-hhz9f\" (UID: \"808fb947-228d-42c4-ba11-480348f80d8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.459334 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df8c05f-b523-439b-908b-c4f34b22b7e9-proxy-tls\") pod \"machine-config-controller-84d6567774-lrwnv\" (UID: \"4df8c05f-b523-439b-908b-c4f34b22b7e9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.459461 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a6106c0-75fa-4285-bc23-06ced58cf133-audit-dir\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.459613 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fb5baaeb-c041-44ff-9cb4-e2962a0d1b5a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6s2qz\" (UID: 
\"fb5baaeb-c041-44ff-9cb4-e2962a0d1b5a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6s2qz" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.459680 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-config\") pod \"controller-manager-879f6c89f-r5qmp\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460543 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvpn6\" (UniqueName: \"kubernetes.io/projected/1f8986ee-ae07-4ffe-89f2-c73eca4d3465-kube-api-access-rvpn6\") pod \"openshift-config-operator-7777fb866f-cmnx5\" (UID: \"1f8986ee-ae07-4ffe-89f2-c73eca4d3465\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460594 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460621 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbmfd\" (UniqueName: \"kubernetes.io/projected/502a4051-5a60-4e90-a3f2-7dc035950a9b-kube-api-access-qbmfd\") pod \"marketplace-operator-79b997595-tgvmj\" (UID: \"502a4051-5a60-4e90-a3f2-7dc035950a9b\") " pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460646 4725 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19058e6-30ec-474e-bada-73b4981a9b65-service-ca-bundle\") pod \"router-default-5444994796-nxchh\" (UID: \"d19058e6-30ec-474e-bada-73b4981a9b65\") " pod="openshift-ingress/router-default-5444994796-nxchh" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460666 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d0ff97b-8da9-4156-a78b-9ebd6886313f-trusted-ca\") pod \"ingress-operator-5b745b69d9-2dsbj\" (UID: \"9d0ff97b-8da9-4156-a78b-9ebd6886313f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460700 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-audit-policies\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460719 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8d4d608-4f73-4365-a535-71e712884eb9-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9lr6k\" (UID: \"a8d4d608-4f73-4365-a535-71e712884eb9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460740 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57xsn\" (UniqueName: \"kubernetes.io/projected/4df8c05f-b523-439b-908b-c4f34b22b7e9-kube-api-access-57xsn\") pod 
\"machine-config-controller-84d6567774-lrwnv\" (UID: \"4df8c05f-b523-439b-908b-c4f34b22b7e9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460762 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/808fb947-228d-42c4-ba11-480348f80d8a-images\") pod \"machine-api-operator-5694c8668f-hhz9f\" (UID: \"808fb947-228d-42c4-ba11-480348f80d8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460780 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9d0ff97b-8da9-4156-a78b-9ebd6886313f-metrics-tls\") pod \"ingress-operator-5b745b69d9-2dsbj\" (UID: \"9d0ff97b-8da9-4156-a78b-9ebd6886313f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460832 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-r5qmp\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460852 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f8986ee-ae07-4ffe-89f2-c73eca4d3465-serving-cert\") pod \"openshift-config-operator-7777fb866f-cmnx5\" (UID: \"1f8986ee-ae07-4ffe-89f2-c73eca4d3465\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460872 4725 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cb0c9cf6-4966-4bd0-8933-823bc00e103c-audit-dir\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460907 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac3b56d0-256f-40f8-b2ff-2271f82ff750-etcd-service-ca\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460935 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/808fb947-228d-42c4-ba11-480348f80d8a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hhz9f\" (UID: \"808fb947-228d-42c4-ba11-480348f80d8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460957 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87v9q\" (UniqueName: \"kubernetes.io/projected/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-kube-api-access-87v9q\") pod \"controller-manager-879f6c89f-r5qmp\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460982 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cec62c65-a846-4cc0-bb51-01d2d70c4c85-ca-trust-extracted\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461005 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7f131da2-d815-48eb-b2ab-7f6df6a4039a-profile-collector-cert\") pod \"catalog-operator-68c6474976-hqvrw\" (UID: \"7f131da2-d815-48eb-b2ab-7f6df6a4039a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461032 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fb63abc7-f429-46c5-aa23-259063c394d0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mscpw\" (UID: \"fb63abc7-f429-46c5-aa23-259063c394d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461056 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a8d4d608-4f73-4365-a535-71e712884eb9-images\") pod \"machine-config-operator-74547568cd-9lr6k\" (UID: \"a8d4d608-4f73-4365-a535-71e712884eb9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461099 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fb63abc7-f429-46c5-aa23-259063c394d0-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mscpw\" (UID: \"fb63abc7-f429-46c5-aa23-259063c394d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461125 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1-machine-approver-tls\") pod \"machine-approver-56656f9798-5d4sw\" (UID: \"1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461146 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/293d5f2d-38b8-49ad-b7cc-eaf6ea931e59-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pnkqn\" (UID: \"293d5f2d-38b8-49ad-b7cc-eaf6ea931e59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461175 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kmh7\" (UniqueName: \"kubernetes.io/projected/600286e6-beb3-40f1-9077-9c8abf34d55a-kube-api-access-7kmh7\") pod \"route-controller-manager-6576b87f9c-lwhzw\" (UID: \"600286e6-beb3-40f1-9077-9c8abf34d55a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461194 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh5wc\" (UniqueName: \"kubernetes.io/projected/6275b0b4-f9e0-4ecc-8a7a-3d702ec753eb-kube-api-access-rh5wc\") pod \"dns-operator-744455d44c-g28q4\" (UID: \"6275b0b4-f9e0-4ecc-8a7a-3d702ec753eb\") " pod="openshift-dns-operator/dns-operator-744455d44c-g28q4"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461219 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frvfh\" (UniqueName: \"kubernetes.io/projected/f27b4eea-081e-421a-83e9-8a5266163c53-kube-api-access-frvfh\") pod \"authentication-operator-69f744f599-5fgr9\" (UID: \"f27b4eea-081e-421a-83e9-8a5266163c53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461251 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3e30f02-3956-427a-a1f3-6e1d51f242d6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-xtwx5\" (UID: \"e3e30f02-3956-427a-a1f3-6e1d51f242d6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461270 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3dff36b-2e27-4c6b-bee4-19cd58833ea7-config\") pod \"kube-controller-manager-operator-78b949d7b-gs8nk\" (UID: \"c3dff36b-2e27-4c6b-bee4-19cd58833ea7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461294 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461313 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4kqz\" (UniqueName: \"kubernetes.io/projected/6c5d8a1b-5c54-4877-8739-a83ab530197d-kube-api-access-c4kqz\") pod \"downloads-7954f5f757-2hmdd\" (UID: \"6c5d8a1b-5c54-4877-8739-a83ab530197d\") " pod="openshift-console/downloads-7954f5f757-2hmdd"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461333 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7f131da2-d815-48eb-b2ab-7f6df6a4039a-srv-cert\") pod \"catalog-operator-68c6474976-hqvrw\" (UID: \"7f131da2-d815-48eb-b2ab-7f6df6a4039a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461357 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/502a4051-5a60-4e90-a3f2-7dc035950a9b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tgvmj\" (UID: \"502a4051-5a60-4e90-a3f2-7dc035950a9b\") " pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461383 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461403 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/600286e6-beb3-40f1-9077-9c8abf34d55a-serving-cert\") pod \"route-controller-manager-6576b87f9c-lwhzw\" (UID: \"600286e6-beb3-40f1-9077-9c8abf34d55a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461423 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/396ed454-f2c7-483a-8aad-0953041099b5-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-kshvw\" (UID: \"396ed454-f2c7-483a-8aad-0953041099b5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461446 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b8859d17-62ea-47b3-ac63-537e69ec9f90-console-config\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461471 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/600286e6-beb3-40f1-9077-9c8abf34d55a-client-ca\") pod \"route-controller-manager-6576b87f9c-lwhzw\" (UID: \"600286e6-beb3-40f1-9077-9c8abf34d55a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461491 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ac3b56d0-256f-40f8-b2ff-2271f82ff750-etcd-ca\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461511 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkqvr\" (UniqueName: \"kubernetes.io/projected/9d0ff97b-8da9-4156-a78b-9ebd6886313f-kube-api-access-dkqvr\") pod \"ingress-operator-5b745b69d9-2dsbj\" (UID: \"9d0ff97b-8da9-4156-a78b-9ebd6886313f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461547 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f27b4eea-081e-421a-83e9-8a5266163c53-service-ca-bundle\") pod \"authentication-operator-69f744f599-5fgr9\" (UID: \"f27b4eea-081e-421a-83e9-8a5266163c53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461571 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t9ww\" (UniqueName: \"kubernetes.io/projected/396ed454-f2c7-483a-8aad-0953041099b5-kube-api-access-9t9ww\") pod \"openshift-controller-manager-operator-756b6f6bc6-kshvw\" (UID: \"396ed454-f2c7-483a-8aad-0953041099b5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461607 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cb0c9cf6-4966-4bd0-8933-823bc00e103c-image-import-ca\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461626 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkllc\" (UniqueName: \"kubernetes.io/projected/b8859d17-62ea-47b3-ac63-537e69ec9f90-kube-api-access-gkllc\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461651 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-bound-sa-token\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461691 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08bc2ba3-3f1f-40df-bf3d-1d5ed634945b-serving-cert\") pod \"console-operator-58897d9998-vc6c2\" (UID: \"08bc2ba3-3f1f-40df-bf3d-1d5ed634945b\") " pod="openshift-console-operator/console-operator-58897d9998-vc6c2"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.462135 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/502a4051-5a60-4e90-a3f2-7dc035950a9b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tgvmj\" (UID: \"502a4051-5a60-4e90-a3f2-7dc035950a9b\") " pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.462140 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cec62c65-a846-4cc0-bb51-01d2d70c4c85-trusted-ca\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.462161 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cb0c9cf6-4966-4bd0-8933-823bc00e103c-etcd-client\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.462255 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cb0c9cf6-4966-4bd0-8933-823bc00e103c-encryption-config\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.462330 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-registry-tls\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:02 crc kubenswrapper[4725]: E0120 11:07:02.462396 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:02.962377771 +0000 UTC m=+151.170699794 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.462393 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.462473 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cghgt\" (UniqueName: \"kubernetes.io/projected/1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1-kube-api-access-cghgt\") pod \"machine-approver-56656f9798-5d4sw\" (UID: \"1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.463249 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cec62c65-a846-4cc0-bb51-01d2d70c4c85-ca-trust-extracted\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.463248 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-client-ca\") pod \"controller-manager-879f6c89f-r5qmp\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.463328 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/396ed454-f2c7-483a-8aad-0953041099b5-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-kshvw\" (UID: \"396ed454-f2c7-483a-8aad-0953041099b5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.463417 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-secret-volume\") pod \"collect-profiles-29481780-smks9\" (UID: \"e2d56c6e-b9ad-4de9-8fe6-06b00293050e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.468410 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-registry-tls\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.469999 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cec62c65-a846-4cc0-bb51-01d2d70c4c85-installation-pull-secrets\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.483617 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nmbb\" (UniqueName: \"kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-kube-api-access-5nmbb\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.486356 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.500126 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.529005 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-bound-sa-token\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.584569 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:02 crc kubenswrapper[4725]: E0120 11:07:02.584862 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:03.084843412 +0000 UTC m=+151.293165385 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586355 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8d4d608-4f73-4365-a535-71e712884eb9-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9lr6k\" (UID: \"a8d4d608-4f73-4365-a535-71e712884eb9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586406 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/876f0761-c4c3-42f7-81f8-9a26071a7676-signing-cabundle\") pod \"service-ca-9c57cc56f-psvt7\" (UID: \"876f0761-c4c3-42f7-81f8-9a26071a7676\") " pod="openshift-service-ca/service-ca-9c57cc56f-psvt7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586433 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57xsn\" (UniqueName: \"kubernetes.io/projected/4df8c05f-b523-439b-908b-c4f34b22b7e9-kube-api-access-57xsn\") pod \"machine-config-controller-84d6567774-lrwnv\" (UID: \"4df8c05f-b523-439b-908b-c4f34b22b7e9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586458 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/808fb947-228d-42c4-ba11-480348f80d8a-images\") pod \"machine-api-operator-5694c8668f-hhz9f\" (UID: \"808fb947-228d-42c4-ba11-480348f80d8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586489 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9d0ff97b-8da9-4156-a78b-9ebd6886313f-metrics-tls\") pod \"ingress-operator-5b745b69d9-2dsbj\" (UID: \"9d0ff97b-8da9-4156-a78b-9ebd6886313f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586511 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-r5qmp\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586534 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f8986ee-ae07-4ffe-89f2-c73eca4d3465-serving-cert\") pod \"openshift-config-operator-7777fb866f-cmnx5\" (UID: \"1f8986ee-ae07-4ffe-89f2-c73eca4d3465\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586558 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cb0c9cf6-4966-4bd0-8933-823bc00e103c-audit-dir\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586584 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/222f710d-f6a2-48e7-9175-55b50f3aba30-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrjzq\" (UID: \"222f710d-f6a2-48e7-9175-55b50f3aba30\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586609 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac3b56d0-256f-40f8-b2ff-2271f82ff750-etcd-service-ca\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586635 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/808fb947-228d-42c4-ba11-480348f80d8a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hhz9f\" (UID: \"808fb947-228d-42c4-ba11-480348f80d8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586659 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87v9q\" (UniqueName: \"kubernetes.io/projected/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-kube-api-access-87v9q\") pod \"controller-manager-879f6c89f-r5qmp\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586682 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7f131da2-d815-48eb-b2ab-7f6df6a4039a-profile-collector-cert\") pod \"catalog-operator-68c6474976-hqvrw\" (UID: \"7f131da2-d815-48eb-b2ab-7f6df6a4039a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586706 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fb63abc7-f429-46c5-aa23-259063c394d0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mscpw\" (UID: \"fb63abc7-f429-46c5-aa23-259063c394d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586731 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eca1f8da-59f2-404e-a5e0-dbe1a191b885-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-7j2sn\" (UID: \"eca1f8da-59f2-404e-a5e0-dbe1a191b885\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7j2sn"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586753 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a8d4d608-4f73-4365-a535-71e712884eb9-images\") pod \"machine-config-operator-74547568cd-9lr6k\" (UID: \"a8d4d608-4f73-4365-a535-71e712884eb9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586776 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e1eba244-7c59-4933-ad4c-5dccc8fdc854-tmpfs\") pod \"packageserver-d55dfcdfc-d7t4z\" (UID: \"e1eba244-7c59-4933-ad4c-5dccc8fdc854\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586800 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fb63abc7-f429-46c5-aa23-259063c394d0-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mscpw\" (UID: \"fb63abc7-f429-46c5-aa23-259063c394d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586823 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1-machine-approver-tls\") pod \"machine-approver-56656f9798-5d4sw\" (UID: \"1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586845 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/293d5f2d-38b8-49ad-b7cc-eaf6ea931e59-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pnkqn\" (UID: \"293d5f2d-38b8-49ad-b7cc-eaf6ea931e59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586869 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kmh7\" (UniqueName: \"kubernetes.io/projected/600286e6-beb3-40f1-9077-9c8abf34d55a-kube-api-access-7kmh7\") pod \"route-controller-manager-6576b87f9c-lwhzw\" (UID: \"600286e6-beb3-40f1-9077-9c8abf34d55a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586890 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh5wc\" (UniqueName: \"kubernetes.io/projected/6275b0b4-f9e0-4ecc-8a7a-3d702ec753eb-kube-api-access-rh5wc\") pod \"dns-operator-744455d44c-g28q4\" (UID: \"6275b0b4-f9e0-4ecc-8a7a-3d702ec753eb\") " pod="openshift-dns-operator/dns-operator-744455d44c-g28q4"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586912 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kwbw\" (UniqueName: \"kubernetes.io/projected/29ff5711-1e81-4ed0-8acd-6124100de37d-kube-api-access-2kwbw\") pod \"service-ca-operator-777779d784-6cxkl\" (UID: \"29ff5711-1e81-4ed0-8acd-6124100de37d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586933 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frvfh\" (UniqueName: \"kubernetes.io/projected/f27b4eea-081e-421a-83e9-8a5266163c53-kube-api-access-frvfh\") pod \"authentication-operator-69f744f599-5fgr9\" (UID: \"f27b4eea-081e-421a-83e9-8a5266163c53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586954 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3e30f02-3956-427a-a1f3-6e1d51f242d6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-xtwx5\" (UID: \"e3e30f02-3956-427a-a1f3-6e1d51f242d6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586977 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3dff36b-2e27-4c6b-bee4-19cd58833ea7-config\") pod \"kube-controller-manager-operator-78b949d7b-gs8nk\" (UID: \"c3dff36b-2e27-4c6b-bee4-19cd58833ea7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587003 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4kqz\" (UniqueName: \"kubernetes.io/projected/6c5d8a1b-5c54-4877-8739-a83ab530197d-kube-api-access-c4kqz\") pod \"downloads-7954f5f757-2hmdd\" (UID: \"6c5d8a1b-5c54-4877-8739-a83ab530197d\") " pod="openshift-console/downloads-7954f5f757-2hmdd"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587026 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7f131da2-d815-48eb-b2ab-7f6df6a4039a-srv-cert\") pod \"catalog-operator-68c6474976-hqvrw\" (UID: \"7f131da2-d815-48eb-b2ab-7f6df6a4039a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587056 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/502a4051-5a60-4e90-a3f2-7dc035950a9b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tgvmj\" (UID: \"502a4051-5a60-4e90-a3f2-7dc035950a9b\") " pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587102 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587128 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db710f25-e573-414c-9129-0dfa945d0b71-metrics-tls\") pod \"dns-default-x85nm\" (UID: \"db710f25-e573-414c-9129-0dfa945d0b71\") " pod="openshift-dns/dns-default-x85nm"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587153 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587178 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/600286e6-beb3-40f1-9077-9c8abf34d55a-serving-cert\") pod \"route-controller-manager-6576b87f9c-lwhzw\" (UID: \"600286e6-beb3-40f1-9077-9c8abf34d55a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587236 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/396ed454-f2c7-483a-8aad-0953041099b5-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-kshvw\" (UID: \"396ed454-f2c7-483a-8aad-0953041099b5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587260 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b8859d17-62ea-47b3-ac63-537e69ec9f90-console-config\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587287 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hmvj\" (UniqueName: \"kubernetes.io/projected/1bb3a268-d628-4c34-b9ca-38d43d82bf86-kube-api-access-7hmvj\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587309 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/600286e6-beb3-40f1-9077-9c8abf34d55a-client-ca\") pod \"route-controller-manager-6576b87f9c-lwhzw\" (UID: \"600286e6-beb3-40f1-9077-9c8abf34d55a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587331 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ac3b56d0-256f-40f8-b2ff-2271f82ff750-etcd-ca\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587353 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkqvr\" (UniqueName: \"kubernetes.io/projected/9d0ff97b-8da9-4156-a78b-9ebd6886313f-kube-api-access-dkqvr\") pod \"ingress-operator-5b745b69d9-2dsbj\" (UID: \"9d0ff97b-8da9-4156-a78b-9ebd6886313f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587374 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f27b4eea-081e-421a-83e9-8a5266163c53-service-ca-bundle\") pod \"authentication-operator-69f744f599-5fgr9\" (UID: \"f27b4eea-081e-421a-83e9-8a5266163c53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587394 4725
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e1eba244-7c59-4933-ad4c-5dccc8fdc854-apiservice-cert\") pod \"packageserver-d55dfcdfc-d7t4z\" (UID: \"e1eba244-7c59-4933-ad4c-5dccc8fdc854\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587421 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t9ww\" (UniqueName: \"kubernetes.io/projected/396ed454-f2c7-483a-8aad-0953041099b5-kube-api-access-9t9ww\") pod \"openshift-controller-manager-operator-756b6f6bc6-kshvw\" (UID: \"396ed454-f2c7-483a-8aad-0953041099b5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587444 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rlqg\" (UniqueName: \"kubernetes.io/projected/8428545d-e40d-4259-b579-ce7bff401888-kube-api-access-7rlqg\") pod \"olm-operator-6b444d44fb-tvh28\" (UID: \"8428545d-e40d-4259-b579-ce7bff401888\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587466 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1bb3a268-d628-4c34-b9ca-38d43d82bf86-mountpoint-dir\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587491 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29ff5711-1e81-4ed0-8acd-6124100de37d-config\") pod 
\"service-ca-operator-777779d784-6cxkl\" (UID: \"29ff5711-1e81-4ed0-8acd-6124100de37d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587515 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cb0c9cf6-4966-4bd0-8933-823bc00e103c-image-import-ca\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587538 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkllc\" (UniqueName: \"kubernetes.io/projected/b8859d17-62ea-47b3-ac63-537e69ec9f90-kube-api-access-gkllc\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587561 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08bc2ba3-3f1f-40df-bf3d-1d5ed634945b-serving-cert\") pod \"console-operator-58897d9998-vc6c2\" (UID: \"08bc2ba3-3f1f-40df-bf3d-1d5ed634945b\") " pod="openshift-console-operator/console-operator-58897d9998-vc6c2" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587583 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc9nk\" (UniqueName: \"kubernetes.io/projected/876f0761-c4c3-42f7-81f8-9a26071a7676-kube-api-access-nc9nk\") pod \"service-ca-9c57cc56f-psvt7\" (UID: \"876f0761-c4c3-42f7-81f8-9a26071a7676\") " pod="openshift-service-ca/service-ca-9c57cc56f-psvt7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587604 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f51665c-048e-4625-846b-872a367664e5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6hcj8\" (UID: \"3f51665c-048e-4625-846b-872a367664e5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587627 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/502a4051-5a60-4e90-a3f2-7dc035950a9b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tgvmj\" (UID: \"502a4051-5a60-4e90-a3f2-7dc035950a9b\") " pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587648 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cb0c9cf6-4966-4bd0-8933-823bc00e103c-etcd-client\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587667 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cb0c9cf6-4966-4bd0-8933-823bc00e103c-encryption-config\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587691 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" 
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587713 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cghgt\" (UniqueName: \"kubernetes.io/projected/1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1-kube-api-access-cghgt\") pod \"machine-approver-56656f9798-5d4sw\" (UID: \"1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587734 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-client-ca\") pod \"controller-manager-879f6c89f-r5qmp\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587754 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/396ed454-f2c7-483a-8aad-0953041099b5-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-kshvw\" (UID: \"396ed454-f2c7-483a-8aad-0953041099b5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587779 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587801 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-secret-volume\") pod \"collect-profiles-29481780-smks9\" (UID: \"e2d56c6e-b9ad-4de9-8fe6-06b00293050e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587822 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-config-volume\") pod \"collect-profiles-29481780-smks9\" (UID: \"e2d56c6e-b9ad-4de9-8fe6-06b00293050e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587845 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wdq5\" (UniqueName: \"kubernetes.io/projected/7f131da2-d815-48eb-b2ab-7f6df6a4039a-kube-api-access-6wdq5\") pod \"catalog-operator-68c6474976-hqvrw\" (UID: \"7f131da2-d815-48eb-b2ab-7f6df6a4039a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587869 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1-auth-proxy-config\") pod \"machine-approver-56656f9798-5d4sw\" (UID: \"1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587895 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" 
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587915 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cb0c9cf6-4966-4bd0-8933-823bc00e103c-audit\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587937 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6275b0b4-f9e0-4ecc-8a7a-3d702ec753eb-metrics-tls\") pod \"dns-operator-744455d44c-g28q4\" (UID: \"6275b0b4-f9e0-4ecc-8a7a-3d702ec753eb\") " pod="openshift-dns-operator/dns-operator-744455d44c-g28q4" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587958 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5m4f\" (UniqueName: \"kubernetes.io/projected/08bc2ba3-3f1f-40df-bf3d-1d5ed634945b-kube-api-access-w5m4f\") pod \"console-operator-58897d9998-vc6c2\" (UID: \"08bc2ba3-3f1f-40df-bf3d-1d5ed634945b\") " pod="openshift-console-operator/console-operator-58897d9998-vc6c2" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587979 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8428545d-e40d-4259-b579-ce7bff401888-srv-cert\") pod \"olm-operator-6b444d44fb-tvh28\" (UID: \"8428545d-e40d-4259-b579-ce7bff401888\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588017 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpntm\" (UniqueName: \"kubernetes.io/projected/fb5baaeb-c041-44ff-9cb4-e2962a0d1b5a-kube-api-access-tpntm\") pod \"cluster-samples-operator-665b6dd947-6s2qz\" (UID: \"fb5baaeb-c041-44ff-9cb4-e2962a0d1b5a\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6s2qz" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588093 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1bb3a268-d628-4c34-b9ca-38d43d82bf86-plugins-dir\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588133 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b8859d17-62ea-47b3-ac63-537e69ec9f90-service-ca\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588157 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2efafa7a-ca64-4166-a72b-9b70b86953ad-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mljkv\" (UID: \"2efafa7a-ca64-4166-a72b-9b70b86953ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588189 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dqmj\" (UniqueName: \"kubernetes.io/projected/2efafa7a-ca64-4166-a72b-9b70b86953ad-kube-api-access-6dqmj\") pod \"kube-storage-version-migrator-operator-b67b599dd-mljkv\" (UID: \"2efafa7a-ca64-4166-a72b-9b70b86953ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588220 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-zcnj7\" (UniqueName: \"kubernetes.io/projected/eca1f8da-59f2-404e-a5e0-dbe1a191b885-kube-api-access-zcnj7\") pod \"multus-admission-controller-857f4d67dd-7j2sn\" (UID: \"eca1f8da-59f2-404e-a5e0-dbe1a191b885\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7j2sn" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588283 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2efafa7a-ca64-4166-a72b-9b70b86953ad-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mljkv\" (UID: \"2efafa7a-ca64-4166-a72b-9b70b86953ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588314 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb0c9cf6-4966-4bd0-8933-823bc00e103c-serving-cert\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588346 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b07c5d50-bb91-412d-b86a-3d736a16a81d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-sh5db\" (UID: \"b07c5d50-bb91-412d-b86a-3d736a16a81d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sh5db" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588374 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1bb3a268-d628-4c34-b9ca-38d43d82bf86-registration-dir\") pod \"csi-hostpathplugin-9vt8w\" (UID: 
\"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588400 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac3b56d0-256f-40f8-b2ff-2271f82ff750-serving-cert\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588422 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f27b4eea-081e-421a-83e9-8a5266163c53-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5fgr9\" (UID: \"f27b4eea-081e-421a-83e9-8a5266163c53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588456 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588479 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpkjz\" (UniqueName: \"kubernetes.io/projected/e3e30f02-3956-427a-a1f3-6e1d51f242d6-kube-api-access-rpkjz\") pod \"openshift-apiserver-operator-796bbdcf4f-xtwx5\" (UID: \"e3e30f02-3956-427a-a1f3-6e1d51f242d6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588511 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/db710f25-e573-414c-9129-0dfa945d0b71-config-volume\") pod \"dns-default-x85nm\" (UID: \"db710f25-e573-414c-9129-0dfa945d0b71\") " pod="openshift-dns/dns-default-x85nm" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588582 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d19058e6-30ec-474e-bada-73b4981a9b65-metrics-certs\") pod \"router-default-5444994796-nxchh\" (UID: \"d19058e6-30ec-474e-bada-73b4981a9b65\") " pod="openshift-ingress/router-default-5444994796-nxchh" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588610 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1bb3a268-d628-4c34-b9ca-38d43d82bf86-csi-data-dir\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588631 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/38cb64e1-bd23-43eb-9eae-7c05f040640b-certs\") pod \"machine-config-server-kkxct\" (UID: \"38cb64e1-bd23-43eb-9eae-7c05f040640b\") " pod="openshift-machine-config-operator/machine-config-server-kkxct" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588656 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3dff36b-2e27-4c6b-bee4-19cd58833ea7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gs8nk\" (UID: \"c3dff36b-2e27-4c6b-bee4-19cd58833ea7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588679 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3dff36b-2e27-4c6b-bee4-19cd58833ea7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gs8nk\" (UID: \"c3dff36b-2e27-4c6b-bee4-19cd58833ea7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588701 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1-config\") pod \"machine-approver-56656f9798-5d4sw\" (UID: \"1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588722 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29ff5711-1e81-4ed0-8acd-6124100de37d-serving-cert\") pod \"service-ca-operator-777779d784-6cxkl\" (UID: \"29ff5711-1e81-4ed0-8acd-6124100de37d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588743 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/293d5f2d-38b8-49ad-b7cc-eaf6ea931e59-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pnkqn\" (UID: \"293d5f2d-38b8-49ad-b7cc-eaf6ea931e59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588767 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4df8c05f-b523-439b-908b-c4f34b22b7e9-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lrwnv\" (UID: 
\"4df8c05f-b523-439b-908b-c4f34b22b7e9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588789 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkzb8\" (UniqueName: \"kubernetes.io/projected/e1eba244-7c59-4933-ad4c-5dccc8fdc854-kube-api-access-fkzb8\") pod \"packageserver-d55dfcdfc-d7t4z\" (UID: \"e1eba244-7c59-4933-ad4c-5dccc8fdc854\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588813 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588841 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwl4m\" (UniqueName: \"kubernetes.io/projected/808fb947-228d-42c4-ba11-480348f80d8a-kube-api-access-lwl4m\") pod \"machine-api-operator-5694c8668f-hhz9f\" (UID: \"808fb947-228d-42c4-ba11-480348f80d8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588864 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wmnh\" (UniqueName: \"kubernetes.io/projected/ac3b56d0-256f-40f8-b2ff-2271f82ff750-kube-api-access-2wmnh\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588888 4725 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-dw4rl\" (UniqueName: \"kubernetes.io/projected/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-kube-api-access-dw4rl\") pod \"collect-profiles-29481780-smks9\" (UID: \"e2d56c6e-b9ad-4de9-8fe6-06b00293050e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588913 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08bc2ba3-3f1f-40df-bf3d-1d5ed634945b-config\") pod \"console-operator-58897d9998-vc6c2\" (UID: \"08bc2ba3-3f1f-40df-bf3d-1d5ed634945b\") " pod="openshift-console-operator/console-operator-58897d9998-vc6c2" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588937 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1f8986ee-ae07-4ffe-89f2-c73eca4d3465-available-featuregates\") pod \"openshift-config-operator-7777fb866f-cmnx5\" (UID: \"1f8986ee-ae07-4ffe-89f2-c73eca4d3465\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588962 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb0c9cf6-4966-4bd0-8933-823bc00e103c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588988 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f27b4eea-081e-421a-83e9-8a5266163c53-config\") pod \"authentication-operator-69f744f599-5fgr9\" (UID: \"f27b4eea-081e-421a-83e9-8a5266163c53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9" Jan 20 11:07:02 
crc kubenswrapper[4725]: I0120 11:07:02.589012 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbqpd\" (UniqueName: \"kubernetes.io/projected/6023e844-87d6-4f4d-bf86-a685b937cda5-kube-api-access-bbqpd\") pod \"ingress-canary-4s7gv\" (UID: \"6023e844-87d6-4f4d-bf86-a685b937cda5\") " pod="openshift-ingress-canary/ingress-canary-4s7gv"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589037 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cb0c9cf6-4966-4bd0-8933-823bc00e103c-node-pullsecrets\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589059 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6023e844-87d6-4f4d-bf86-a685b937cda5-cert\") pod \"ingress-canary-4s7gv\" (UID: \"6023e844-87d6-4f4d-bf86-a685b937cda5\") " pod="openshift-ingress-canary/ingress-canary-4s7gv"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589097 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsjkl\" (UniqueName: \"kubernetes.io/projected/db710f25-e573-414c-9129-0dfa945d0b71-kube-api-access-vsjkl\") pod \"dns-default-x85nm\" (UID: \"db710f25-e573-414c-9129-0dfa945d0b71\") " pod="openshift-dns/dns-default-x85nm"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589120 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpfs4\" (UniqueName: \"kubernetes.io/projected/38cb64e1-bd23-43eb-9eae-7c05f040640b-kube-api-access-dpfs4\") pod \"machine-config-server-kkxct\" (UID: \"38cb64e1-bd23-43eb-9eae-7c05f040640b\") " pod="openshift-machine-config-operator/machine-config-server-kkxct"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589145 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbd48\" (UniqueName: \"kubernetes.io/projected/cf2d94b1-aa78-4a9d-8e32-232f92ec8988-kube-api-access-qbd48\") pod \"migrator-59844c95c7-rlw62\" (UID: \"cf2d94b1-aa78-4a9d-8e32-232f92ec8988\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlw62"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589173 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gzxg\" (UniqueName: \"kubernetes.io/projected/cb0c9cf6-4966-4bd0-8933-823bc00e103c-kube-api-access-2gzxg\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589197 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9d0ff97b-8da9-4156-a78b-9ebd6886313f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2dsbj\" (UID: \"9d0ff97b-8da9-4156-a78b-9ebd6886313f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589218 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8859d17-62ea-47b3-ac63-537e69ec9f90-trusted-ca-bundle\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589254 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/38cb64e1-bd23-43eb-9eae-7c05f040640b-node-bootstrap-token\") pod \"machine-config-server-kkxct\" (UID: \"38cb64e1-bd23-43eb-9eae-7c05f040640b\") " pod="openshift-machine-config-operator/machine-config-server-kkxct"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589281 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfhjr\" (UniqueName: \"kubernetes.io/projected/3f51665c-048e-4625-846b-872a367664e5-kube-api-access-nfhjr\") pod \"package-server-manager-789f6589d5-6hcj8\" (UID: \"3f51665c-048e-4625-846b-872a367664e5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589302 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/876f0761-c4c3-42f7-81f8-9a26071a7676-signing-key\") pod \"service-ca-9c57cc56f-psvt7\" (UID: \"876f0761-c4c3-42f7-81f8-9a26071a7676\") " pod="openshift-service-ca/service-ca-9c57cc56f-psvt7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589329 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589353 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f27b4eea-081e-421a-83e9-8a5266163c53-serving-cert\") pod \"authentication-operator-69f744f599-5fgr9\" (UID: \"f27b4eea-081e-421a-83e9-8a5266163c53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589382 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589410 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdkh9\" (UniqueName: \"kubernetes.io/projected/b07c5d50-bb91-412d-b86a-3d736a16a81d-kube-api-access-tdkh9\") pod \"control-plane-machine-set-operator-78cbb6b69f-sh5db\" (UID: \"b07c5d50-bb91-412d-b86a-3d736a16a81d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sh5db"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589438 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589464 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdplr\" (UniqueName: \"kubernetes.io/projected/fb63abc7-f429-46c5-aa23-259063c394d0-kube-api-access-qdplr\") pod \"cluster-image-registry-operator-dc59b4c8b-mscpw\" (UID: \"fb63abc7-f429-46c5-aa23-259063c394d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589501 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/293d5f2d-38b8-49ad-b7cc-eaf6ea931e59-config\") pod \"kube-apiserver-operator-766d6c64bb-pnkqn\" (UID: \"293d5f2d-38b8-49ad-b7cc-eaf6ea931e59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589523 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/600286e6-beb3-40f1-9077-9c8abf34d55a-config\") pod \"route-controller-manager-6576b87f9c-lwhzw\" (UID: \"600286e6-beb3-40f1-9077-9c8abf34d55a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589544 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac3b56d0-256f-40f8-b2ff-2271f82ff750-config\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589570 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d2rw\" (UniqueName: \"kubernetes.io/projected/9a6106c0-75fa-4285-bc23-06ced58cf133-kube-api-access-8d2rw\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589607 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1bb3a268-d628-4c34-b9ca-38d43d82bf86-socket-dir\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589631 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb63abc7-f429-46c5-aa23-259063c394d0-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mscpw\" (UID: \"fb63abc7-f429-46c5-aa23-259063c394d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589655 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb0c9cf6-4966-4bd0-8933-823bc00e103c-config\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589679 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d19058e6-30ec-474e-bada-73b4981a9b65-default-certificate\") pod \"router-default-5444994796-nxchh\" (UID: \"d19058e6-30ec-474e-bada-73b4981a9b65\") " pod="openshift-ingress/router-default-5444994796-nxchh"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589704 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ac3b56d0-256f-40f8-b2ff-2271f82ff750-etcd-client\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589728 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75fg8\" (UniqueName: \"kubernetes.io/projected/d19058e6-30ec-474e-bada-73b4981a9b65-kube-api-access-75fg8\") pod \"router-default-5444994796-nxchh\" (UID: \"d19058e6-30ec-474e-bada-73b4981a9b65\") " pod="openshift-ingress/router-default-5444994796-nxchh"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589750 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d19058e6-30ec-474e-bada-73b4981a9b65-stats-auth\") pod \"router-default-5444994796-nxchh\" (UID: \"d19058e6-30ec-474e-bada-73b4981a9b65\") " pod="openshift-ingress/router-default-5444994796-nxchh"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589776 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b8859d17-62ea-47b3-ac63-537e69ec9f90-console-oauth-config\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589799 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/08bc2ba3-3f1f-40df-bf3d-1d5ed634945b-trusted-ca\") pod \"console-operator-58897d9998-vc6c2\" (UID: \"08bc2ba3-3f1f-40df-bf3d-1d5ed634945b\") " pod="openshift-console-operator/console-operator-58897d9998-vc6c2"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589824 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-serving-cert\") pod \"controller-manager-879f6c89f-r5qmp\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589846 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58svc\" (UniqueName: \"kubernetes.io/projected/a8d4d608-4f73-4365-a535-71e712884eb9-kube-api-access-58svc\") pod \"machine-config-operator-74547568cd-9lr6k\" (UID: \"a8d4d608-4f73-4365-a535-71e712884eb9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589872 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589896 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b8859d17-62ea-47b3-ac63-537e69ec9f90-oauth-serving-cert\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589921 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3e30f02-3956-427a-a1f3-6e1d51f242d6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-xtwx5\" (UID: \"e3e30f02-3956-427a-a1f3-6e1d51f242d6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589947 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589973 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8859d17-62ea-47b3-ac63-537e69ec9f90-console-serving-cert\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590002 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a8d4d608-4f73-4365-a535-71e712884eb9-proxy-tls\") pod \"machine-config-operator-74547568cd-9lr6k\" (UID: \"a8d4d608-4f73-4365-a535-71e712884eb9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590025 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cb0c9cf6-4966-4bd0-8933-823bc00e103c-etcd-serving-ca\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590049 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/808fb947-228d-42c4-ba11-480348f80d8a-config\") pod \"machine-api-operator-5694c8668f-hhz9f\" (UID: \"808fb947-228d-42c4-ba11-480348f80d8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590072 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df8c05f-b523-439b-908b-c4f34b22b7e9-proxy-tls\") pod \"machine-config-controller-84d6567774-lrwnv\" (UID: \"4df8c05f-b523-439b-908b-c4f34b22b7e9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590131 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a6106c0-75fa-4285-bc23-06ced58cf133-audit-dir\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590156 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fb5baaeb-c041-44ff-9cb4-e2962a0d1b5a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6s2qz\" (UID: \"fb5baaeb-c041-44ff-9cb4-e2962a0d1b5a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6s2qz"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590180 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-config\") pod \"controller-manager-879f6c89f-r5qmp\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590209 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvpn6\" (UniqueName: \"kubernetes.io/projected/1f8986ee-ae07-4ffe-89f2-c73eca4d3465-kube-api-access-rvpn6\") pod \"openshift-config-operator-7777fb866f-cmnx5\" (UID: \"1f8986ee-ae07-4ffe-89f2-c73eca4d3465\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590235 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/222f710d-f6a2-48e7-9175-55b50f3aba30-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrjzq\" (UID: \"222f710d-f6a2-48e7-9175-55b50f3aba30\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590259 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/222f710d-f6a2-48e7-9175-55b50f3aba30-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrjzq\" (UID: \"222f710d-f6a2-48e7-9175-55b50f3aba30\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590289 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590317 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbmfd\" (UniqueName: \"kubernetes.io/projected/502a4051-5a60-4e90-a3f2-7dc035950a9b-kube-api-access-qbmfd\") pod \"marketplace-operator-79b997595-tgvmj\" (UID: \"502a4051-5a60-4e90-a3f2-7dc035950a9b\") " pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590342 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19058e6-30ec-474e-bada-73b4981a9b65-service-ca-bundle\") pod \"router-default-5444994796-nxchh\" (UID: \"d19058e6-30ec-474e-bada-73b4981a9b65\") " pod="openshift-ingress/router-default-5444994796-nxchh"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590366 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d0ff97b-8da9-4156-a78b-9ebd6886313f-trusted-ca\") pod \"ingress-operator-5b745b69d9-2dsbj\" (UID: \"9d0ff97b-8da9-4156-a78b-9ebd6886313f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590402 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e1eba244-7c59-4933-ad4c-5dccc8fdc854-webhook-cert\") pod \"packageserver-d55dfcdfc-d7t4z\" (UID: \"e1eba244-7c59-4933-ad4c-5dccc8fdc854\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590425 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8428545d-e40d-4259-b579-ce7bff401888-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tvh28\" (UID: \"8428545d-e40d-4259-b579-ce7bff401888\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590457 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-audit-policies\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.591308 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-audit-policies\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.591547 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/502a4051-5a60-4e90-a3f2-7dc035950a9b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tgvmj\" (UID: \"502a4051-5a60-4e90-a3f2-7dc035950a9b\") " pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.592299 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8d4d608-4f73-4365-a535-71e712884eb9-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9lr6k\" (UID: \"a8d4d608-4f73-4365-a535-71e712884eb9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.592678 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4df8c05f-b523-439b-908b-c4f34b22b7e9-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lrwnv\" (UID: \"4df8c05f-b523-439b-908b-c4f34b22b7e9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.593361 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-secret-volume\") pod \"collect-profiles-29481780-smks9\" (UID: \"e2d56c6e-b9ad-4de9-8fe6-06b00293050e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.593828 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/808fb947-228d-42c4-ba11-480348f80d8a-images\") pod \"machine-api-operator-5694c8668f-hhz9f\" (UID: \"808fb947-228d-42c4-ba11-480348f80d8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.594913 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-config-volume\") pod \"collect-profiles-29481780-smks9\" (UID: \"e2d56c6e-b9ad-4de9-8fe6-06b00293050e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.595145 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.595194 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/08bc2ba3-3f1f-40df-bf3d-1d5ed634945b-trusted-ca\") pod \"console-operator-58897d9998-vc6c2\" (UID: \"08bc2ba3-3f1f-40df-bf3d-1d5ed634945b\") " pod="openshift-console-operator/console-operator-58897d9998-vc6c2"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.595747 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1-auth-proxy-config\") pod \"machine-approver-56656f9798-5d4sw\" (UID: \"1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.596150 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.596469 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb0c9cf6-4966-4bd0-8933-823bc00e103c-config\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.596712 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cb0c9cf6-4966-4bd0-8933-823bc00e103c-image-import-ca\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.597069 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a8d4d608-4f73-4365-a535-71e712884eb9-proxy-tls\") pod \"machine-config-operator-74547568cd-9lr6k\" (UID: \"a8d4d608-4f73-4365-a535-71e712884eb9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.597193 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08bc2ba3-3f1f-40df-bf3d-1d5ed634945b-config\") pod \"console-operator-58897d9998-vc6c2\" (UID: \"08bc2ba3-3f1f-40df-bf3d-1d5ed634945b\") " pod="openshift-console-operator/console-operator-58897d9998-vc6c2"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.597327 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb63abc7-f429-46c5-aa23-259063c394d0-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mscpw\" (UID: \"fb63abc7-f429-46c5-aa23-259063c394d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.597776 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1f8986ee-ae07-4ffe-89f2-c73eca4d3465-available-featuregates\") pod \"openshift-config-operator-7777fb866f-cmnx5\" (UID: \"1f8986ee-ae07-4ffe-89f2-c73eca4d3465\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.597960 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cb0c9cf6-4966-4bd0-8933-823bc00e103c-etcd-serving-ca\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.598381 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/293d5f2d-38b8-49ad-b7cc-eaf6ea931e59-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pnkqn\" (UID: \"293d5f2d-38b8-49ad-b7cc-eaf6ea931e59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.598695 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/808fb947-228d-42c4-ba11-480348f80d8a-config\") pod \"machine-api-operator-5694c8668f-hhz9f\" (UID: \"808fb947-228d-42c4-ba11-480348f80d8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.598922 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.613410 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b8859d17-62ea-47b3-ac63-537e69ec9f90-service-ca\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.613971 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2efafa7a-ca64-4166-a72b-9b70b86953ad-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mljkv\" (UID: \"2efafa7a-ca64-4166-a72b-9b70b86953ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.614389 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2efafa7a-ca64-4166-a72b-9b70b86953ad-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mljkv\" (UID: \"2efafa7a-ca64-4166-a72b-9b70b86953ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.619382 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a6106c0-75fa-4285-bc23-06ced58cf133-audit-dir\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.619630 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-r5qmp\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.645706 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb0c9cf6-4966-4bd0-8933-823bc00e103c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.647009 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f27b4eea-081e-421a-83e9-8a5266163c53-config\") pod \"authentication-operator-69f744f599-5fgr9\" (UID: \"f27b4eea-081e-421a-83e9-8a5266163c53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.647295 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cb0c9cf6-4966-4bd0-8933-823bc00e103c-node-pullsecrets\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.648538 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8859d17-62ea-47b3-ac63-537e69ec9f90-trusted-ca-bundle\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.649492 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-client-ca\") pod \"controller-manager-879f6c89f-r5qmp\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.650205 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/396ed454-f2c7-483a-8aad-0953041099b5-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-kshvw\" (UID: \"396ed454-f2c7-483a-8aad-0953041099b5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.651526 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57xsn\" (UniqueName: \"kubernetes.io/projected/4df8c05f-b523-439b-908b-c4f34b22b7e9-kube-api-access-57xsn\") pod \"machine-config-controller-84d6567774-lrwnv\" (UID: \"4df8c05f-b523-439b-908b-c4f34b22b7e9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.651607 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.651678 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac3b56d0-256f-40f8-b2ff-2271f82ff750-serving-cert\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.651768 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/600286e6-beb3-40f1-9077-9c8abf34d55a-serving-cert\") pod \"route-controller-manager-6576b87f9c-lwhzw\" (UID: \"600286e6-beb3-40f1-9077-9c8abf34d55a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.652025 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb0c9cf6-4966-4bd0-8933-823bc00e103c-serving-cert\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.652272 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d19058e6-30ec-474e-bada-73b4981a9b65-metrics-certs\") pod \"router-default-5444994796-nxchh\" (UID: \"d19058e6-30ec-474e-bada-73b4981a9b65\") " pod="openshift-ingress/router-default-5444994796-nxchh"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.652431 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08bc2ba3-3f1f-40df-bf3d-1d5ed634945b-serving-cert\") pod \"console-operator-58897d9998-vc6c2\" (UID: \"08bc2ba3-3f1f-40df-bf3d-1d5ed634945b\") " pod="openshift-console-operator/console-operator-58897d9998-vc6c2"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.652446 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-serving-cert\") pod \"controller-manager-879f6c89f-r5qmp\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.652499 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d19058e6-30ec-474e-bada-73b4981a9b65-default-certificate\") pod \"router-default-5444994796-nxchh\" (UID: \"d19058e6-30ec-474e-bada-73b4981a9b65\") " pod="openshift-ingress/router-default-5444994796-nxchh"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.652749 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9d0ff97b-8da9-4156-a78b-9ebd6886313f-metrics-tls\") pod \"ingress-operator-5b745b69d9-2dsbj\" (UID: \"9d0ff97b-8da9-4156-a78b-9ebd6886313f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.653028 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cb0c9cf6-4966-4bd0-8933-823bc00e103c-audit\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.653112 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f8986ee-ae07-4ffe-89f2-c73eca4d3465-serving-cert\") pod \"openshift-config-operator-7777fb866f-cmnx5\" (UID: \"1f8986ee-ae07-4ffe-89f2-c73eca4d3465\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.653254 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3dff36b-2e27-4c6b-bee4-19cd58833ea7-config\") pod \"kube-controller-manager-operator-78b949d7b-gs8nk\" (UID:
\"c3dff36b-2e27-4c6b-bee4-19cd58833ea7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.653365 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3dff36b-2e27-4c6b-bee4-19cd58833ea7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gs8nk\" (UID: \"c3dff36b-2e27-4c6b-bee4-19cd58833ea7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.653545 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac3b56d0-256f-40f8-b2ff-2271f82ff750-etcd-service-ca\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.653736 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df8c05f-b523-439b-908b-c4f34b22b7e9-proxy-tls\") pod \"machine-config-controller-84d6567774-lrwnv\" (UID: \"4df8c05f-b523-439b-908b-c4f34b22b7e9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.653882 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.654042 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"images\" (UniqueName: \"kubernetes.io/configmap/a8d4d608-4f73-4365-a535-71e712884eb9-images\") pod \"machine-config-operator-74547568cd-9lr6k\" (UID: \"a8d4d608-4f73-4365-a535-71e712884eb9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.654307 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1-config\") pod \"machine-approver-56656f9798-5d4sw\" (UID: \"1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.655455 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7f131da2-d815-48eb-b2ab-7f6df6a4039a-srv-cert\") pod \"catalog-operator-68c6474976-hqvrw\" (UID: \"7f131da2-d815-48eb-b2ab-7f6df6a4039a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.655523 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1-machine-approver-tls\") pod \"machine-approver-56656f9798-5d4sw\" (UID: \"1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.655544 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 
11:07:02.657424 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.657975 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ac3b56d0-256f-40f8-b2ff-2271f82ff750-etcd-client\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.658309 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b8859d17-62ea-47b3-ac63-537e69ec9f90-oauth-serving-cert\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.658335 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fb5baaeb-c041-44ff-9cb4-e2962a0d1b5a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6s2qz\" (UID: \"fb5baaeb-c041-44ff-9cb4-e2962a0d1b5a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6s2qz" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.658352 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cb0c9cf6-4966-4bd0-8933-823bc00e103c-audit-dir\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 
11:07:02 crc kubenswrapper[4725]: E0120 11:07:02.658662 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:03.15864934 +0000 UTC m=+151.366971313 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.658806 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3e30f02-3956-427a-a1f3-6e1d51f242d6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-xtwx5\" (UID: \"e3e30f02-3956-427a-a1f3-6e1d51f242d6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.659024 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3e30f02-3956-427a-a1f3-6e1d51f242d6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-xtwx5\" (UID: \"e3e30f02-3956-427a-a1f3-6e1d51f242d6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.659571 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19058e6-30ec-474e-bada-73b4981a9b65-service-ca-bundle\") pod \"router-default-5444994796-nxchh\" (UID: \"d19058e6-30ec-474e-bada-73b4981a9b65\") " 
pod="openshift-ingress/router-default-5444994796-nxchh" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.659643 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/600286e6-beb3-40f1-9077-9c8abf34d55a-config\") pod \"route-controller-manager-6576b87f9c-lwhzw\" (UID: \"600286e6-beb3-40f1-9077-9c8abf34d55a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.659677 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f27b4eea-081e-421a-83e9-8a5266163c53-serving-cert\") pod \"authentication-operator-69f744f599-5fgr9\" (UID: \"f27b4eea-081e-421a-83e9-8a5266163c53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.660226 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d0ff97b-8da9-4156-a78b-9ebd6886313f-trusted-ca\") pod \"ingress-operator-5b745b69d9-2dsbj\" (UID: \"9d0ff97b-8da9-4156-a78b-9ebd6886313f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.660846 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/293d5f2d-38b8-49ad-b7cc-eaf6ea931e59-config\") pod \"kube-apiserver-operator-766d6c64bb-pnkqn\" (UID: \"293d5f2d-38b8-49ad-b7cc-eaf6ea931e59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.661450 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/600286e6-beb3-40f1-9077-9c8abf34d55a-client-ca\") pod \"route-controller-manager-6576b87f9c-lwhzw\" 
(UID: \"600286e6-beb3-40f1-9077-9c8abf34d55a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.661566 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ac3b56d0-256f-40f8-b2ff-2271f82ff750-etcd-ca\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.661801 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac3b56d0-256f-40f8-b2ff-2271f82ff750-config\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.661774 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f27b4eea-081e-421a-83e9-8a5266163c53-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5fgr9\" (UID: \"f27b4eea-081e-421a-83e9-8a5266163c53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.661929 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.661997 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/b8859d17-62ea-47b3-ac63-537e69ec9f90-console-config\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.662598 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.662817 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6275b0b4-f9e0-4ecc-8a7a-3d702ec753eb-metrics-tls\") pod \"dns-operator-744455d44c-g28q4\" (UID: \"6275b0b4-f9e0-4ecc-8a7a-3d702ec753eb\") " pod="openshift-dns-operator/dns-operator-744455d44c-g28q4" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.663762 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f27b4eea-081e-421a-83e9-8a5266163c53-service-ca-bundle\") pod \"authentication-operator-69f744f599-5fgr9\" (UID: \"f27b4eea-081e-421a-83e9-8a5266163c53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.663880 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/808fb947-228d-42c4-ba11-480348f80d8a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hhz9f\" (UID: \"808fb947-228d-42c4-ba11-480348f80d8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.664785 4725 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-config\") pod \"controller-manager-879f6c89f-r5qmp\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.664892 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.665359 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpkjz\" (UniqueName: \"kubernetes.io/projected/e3e30f02-3956-427a-a1f3-6e1d51f242d6-kube-api-access-rpkjz\") pod \"openshift-apiserver-operator-796bbdcf4f-xtwx5\" (UID: \"e3e30f02-3956-427a-a1f3-6e1d51f242d6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.665771 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8859d17-62ea-47b3-ac63-537e69ec9f90-console-serving-cert\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.665777 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/396ed454-f2c7-483a-8aad-0953041099b5-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-kshvw\" (UID: \"396ed454-f2c7-483a-8aad-0953041099b5\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.665839 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cb0c9cf6-4966-4bd0-8933-823bc00e103c-etcd-client\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.665957 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7f131da2-d815-48eb-b2ab-7f6df6a4039a-profile-collector-cert\") pod \"catalog-operator-68c6474976-hqvrw\" (UID: \"7f131da2-d815-48eb-b2ab-7f6df6a4039a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.665987 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d19058e6-30ec-474e-bada-73b4981a9b65-stats-auth\") pod \"router-default-5444994796-nxchh\" (UID: \"d19058e6-30ec-474e-bada-73b4981a9b65\") " pod="openshift-ingress/router-default-5444994796-nxchh" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.666426 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fb63abc7-f429-46c5-aa23-259063c394d0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mscpw\" (UID: \"fb63abc7-f429-46c5-aa23-259063c394d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.666544 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/cb0c9cf6-4966-4bd0-8933-823bc00e103c-encryption-config\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.667524 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwl4m\" (UniqueName: \"kubernetes.io/projected/808fb947-228d-42c4-ba11-480348f80d8a-kube-api-access-lwl4m\") pod \"machine-api-operator-5694c8668f-hhz9f\" (UID: \"808fb947-228d-42c4-ba11-480348f80d8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.667569 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.667629 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.667870 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wdq5\" (UniqueName: \"kubernetes.io/projected/7f131da2-d815-48eb-b2ab-7f6df6a4039a-kube-api-access-6wdq5\") pod \"catalog-operator-68c6474976-hqvrw\" (UID: \"7f131da2-d815-48eb-b2ab-7f6df6a4039a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.668048 4725 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b8859d17-62ea-47b3-ac63-537e69ec9f90-console-oauth-config\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.668916 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/502a4051-5a60-4e90-a3f2-7dc035950a9b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tgvmj\" (UID: \"502a4051-5a60-4e90-a3f2-7dc035950a9b\") " pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.684935 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wmnh\" (UniqueName: \"kubernetes.io/projected/ac3b56d0-256f-40f8-b2ff-2271f82ff750-kube-api-access-2wmnh\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.690929 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:02 crc kubenswrapper[4725]: E0120 11:07:02.691104 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:03.191064512 +0000 UTC m=+151.399386485 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691205 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdkh9\" (UniqueName: \"kubernetes.io/projected/b07c5d50-bb91-412d-b86a-3d736a16a81d-kube-api-access-tdkh9\") pod \"control-plane-machine-set-operator-78cbb6b69f-sh5db\" (UID: \"b07c5d50-bb91-412d-b86a-3d736a16a81d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sh5db" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691271 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1bb3a268-d628-4c34-b9ca-38d43d82bf86-socket-dir\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691335 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/222f710d-f6a2-48e7-9175-55b50f3aba30-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrjzq\" (UID: \"222f710d-f6a2-48e7-9175-55b50f3aba30\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691363 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/222f710d-f6a2-48e7-9175-55b50f3aba30-config\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-wrjzq\" (UID: \"222f710d-f6a2-48e7-9175-55b50f3aba30\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691395 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691429 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e1eba244-7c59-4933-ad4c-5dccc8fdc854-webhook-cert\") pod \"packageserver-d55dfcdfc-d7t4z\" (UID: \"e1eba244-7c59-4933-ad4c-5dccc8fdc854\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691453 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8428545d-e40d-4259-b579-ce7bff401888-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tvh28\" (UID: \"8428545d-e40d-4259-b579-ce7bff401888\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691477 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/876f0761-c4c3-42f7-81f8-9a26071a7676-signing-cabundle\") pod \"service-ca-9c57cc56f-psvt7\" (UID: \"876f0761-c4c3-42f7-81f8-9a26071a7676\") " pod="openshift-service-ca/service-ca-9c57cc56f-psvt7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691502 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/222f710d-f6a2-48e7-9175-55b50f3aba30-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrjzq\" (UID: \"222f710d-f6a2-48e7-9175-55b50f3aba30\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691541 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eca1f8da-59f2-404e-a5e0-dbe1a191b885-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-7j2sn\" (UID: \"eca1f8da-59f2-404e-a5e0-dbe1a191b885\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7j2sn"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691569 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e1eba244-7c59-4933-ad4c-5dccc8fdc854-tmpfs\") pod \"packageserver-d55dfcdfc-d7t4z\" (UID: \"e1eba244-7c59-4933-ad4c-5dccc8fdc854\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691625 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kwbw\" (UniqueName: \"kubernetes.io/projected/29ff5711-1e81-4ed0-8acd-6124100de37d-kube-api-access-2kwbw\") pod \"service-ca-operator-777779d784-6cxkl\" (UID: \"29ff5711-1e81-4ed0-8acd-6124100de37d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691670 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db710f25-e573-414c-9129-0dfa945d0b71-metrics-tls\") pod \"dns-default-x85nm\" (UID: \"db710f25-e573-414c-9129-0dfa945d0b71\") " pod="openshift-dns/dns-default-x85nm"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691698 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hmvj\" (UniqueName: \"kubernetes.io/projected/1bb3a268-d628-4c34-b9ca-38d43d82bf86-kube-api-access-7hmvj\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691728 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e1eba244-7c59-4933-ad4c-5dccc8fdc854-apiservice-cert\") pod \"packageserver-d55dfcdfc-d7t4z\" (UID: \"e1eba244-7c59-4933-ad4c-5dccc8fdc854\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691760 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rlqg\" (UniqueName: \"kubernetes.io/projected/8428545d-e40d-4259-b579-ce7bff401888-kube-api-access-7rlqg\") pod \"olm-operator-6b444d44fb-tvh28\" (UID: \"8428545d-e40d-4259-b579-ce7bff401888\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691782 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1bb3a268-d628-4c34-b9ca-38d43d82bf86-mountpoint-dir\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691806 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29ff5711-1e81-4ed0-8acd-6124100de37d-config\") pod \"service-ca-operator-777779d784-6cxkl\" (UID: \"29ff5711-1e81-4ed0-8acd-6124100de37d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691873 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9nk\" (UniqueName: \"kubernetes.io/projected/876f0761-c4c3-42f7-81f8-9a26071a7676-kube-api-access-nc9nk\") pod \"service-ca-9c57cc56f-psvt7\" (UID: \"876f0761-c4c3-42f7-81f8-9a26071a7676\") " pod="openshift-service-ca/service-ca-9c57cc56f-psvt7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691901 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f51665c-048e-4625-846b-872a367664e5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6hcj8\" (UID: \"3f51665c-048e-4625-846b-872a367664e5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691960 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8428545d-e40d-4259-b579-ce7bff401888-srv-cert\") pod \"olm-operator-6b444d44fb-tvh28\" (UID: \"8428545d-e40d-4259-b579-ce7bff401888\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691991 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1bb3a268-d628-4c34-b9ca-38d43d82bf86-plugins-dir\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692018 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcnj7\" (UniqueName: \"kubernetes.io/projected/eca1f8da-59f2-404e-a5e0-dbe1a191b885-kube-api-access-zcnj7\") pod \"multus-admission-controller-857f4d67dd-7j2sn\" (UID: \"eca1f8da-59f2-404e-a5e0-dbe1a191b885\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7j2sn"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692049 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b07c5d50-bb91-412d-b86a-3d736a16a81d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-sh5db\" (UID: \"b07c5d50-bb91-412d-b86a-3d736a16a81d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sh5db"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692110 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1bb3a268-d628-4c34-b9ca-38d43d82bf86-registration-dir\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692137 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db710f25-e573-414c-9129-0dfa945d0b71-config-volume\") pod \"dns-default-x85nm\" (UID: \"db710f25-e573-414c-9129-0dfa945d0b71\") " pod="openshift-dns/dns-default-x85nm"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692164 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1bb3a268-d628-4c34-b9ca-38d43d82bf86-csi-data-dir\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692190 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/38cb64e1-bd23-43eb-9eae-7c05f040640b-certs\") pod \"machine-config-server-kkxct\" (UID: \"38cb64e1-bd23-43eb-9eae-7c05f040640b\") " pod="openshift-machine-config-operator/machine-config-server-kkxct"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692218 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29ff5711-1e81-4ed0-8acd-6124100de37d-serving-cert\") pod \"service-ca-operator-777779d784-6cxkl\" (UID: \"29ff5711-1e81-4ed0-8acd-6124100de37d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692243 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkzb8\" (UniqueName: \"kubernetes.io/projected/e1eba244-7c59-4933-ad4c-5dccc8fdc854-kube-api-access-fkzb8\") pod \"packageserver-d55dfcdfc-d7t4z\" (UID: \"e1eba244-7c59-4933-ad4c-5dccc8fdc854\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692277 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbqpd\" (UniqueName: \"kubernetes.io/projected/6023e844-87d6-4f4d-bf86-a685b937cda5-kube-api-access-bbqpd\") pod \"ingress-canary-4s7gv\" (UID: \"6023e844-87d6-4f4d-bf86-a685b937cda5\") " pod="openshift-ingress-canary/ingress-canary-4s7gv"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692296 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6023e844-87d6-4f4d-bf86-a685b937cda5-cert\") pod \"ingress-canary-4s7gv\" (UID: \"6023e844-87d6-4f4d-bf86-a685b937cda5\") " pod="openshift-ingress-canary/ingress-canary-4s7gv"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692316 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsjkl\" (UniqueName: \"kubernetes.io/projected/db710f25-e573-414c-9129-0dfa945d0b71-kube-api-access-vsjkl\") pod \"dns-default-x85nm\" (UID: \"db710f25-e573-414c-9129-0dfa945d0b71\") " pod="openshift-dns/dns-default-x85nm"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692355 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpfs4\" (UniqueName: \"kubernetes.io/projected/38cb64e1-bd23-43eb-9eae-7c05f040640b-kube-api-access-dpfs4\") pod \"machine-config-server-kkxct\" (UID: \"38cb64e1-bd23-43eb-9eae-7c05f040640b\") " pod="openshift-machine-config-operator/machine-config-server-kkxct"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692397 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/38cb64e1-bd23-43eb-9eae-7c05f040640b-node-bootstrap-token\") pod \"machine-config-server-kkxct\" (UID: \"38cb64e1-bd23-43eb-9eae-7c05f040640b\") " pod="openshift-machine-config-operator/machine-config-server-kkxct"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692421 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfhjr\" (UniqueName: \"kubernetes.io/projected/3f51665c-048e-4625-846b-872a367664e5-kube-api-access-nfhjr\") pod \"package-server-manager-789f6589d5-6hcj8\" (UID: \"3f51665c-048e-4625-846b-872a367664e5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692444 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/876f0761-c4c3-42f7-81f8-9a26071a7676-signing-key\") pod \"service-ca-9c57cc56f-psvt7\" (UID: \"876f0761-c4c3-42f7-81f8-9a26071a7676\") " pod="openshift-service-ca/service-ca-9c57cc56f-psvt7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692616 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692966 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/876f0761-c4c3-42f7-81f8-9a26071a7676-signing-cabundle\") pod \"service-ca-9c57cc56f-psvt7\" (UID: \"876f0761-c4c3-42f7-81f8-9a26071a7676\") " pod="openshift-service-ca/service-ca-9c57cc56f-psvt7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.693662 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/222f710d-f6a2-48e7-9175-55b50f3aba30-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrjzq\" (UID: \"222f710d-f6a2-48e7-9175-55b50f3aba30\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.694386 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1bb3a268-d628-4c34-b9ca-38d43d82bf86-registration-dir\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.694641 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e1eba244-7c59-4933-ad4c-5dccc8fdc854-tmpfs\") pod \"packageserver-d55dfcdfc-d7t4z\" (UID: \"e1eba244-7c59-4933-ad4c-5dccc8fdc854\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.695348 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/222f710d-f6a2-48e7-9175-55b50f3aba30-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrjzq\" (UID: \"222f710d-f6a2-48e7-9175-55b50f3aba30\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.695426 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1bb3a268-d628-4c34-b9ca-38d43d82bf86-mountpoint-dir\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.695573 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db710f25-e573-414c-9129-0dfa945d0b71-config-volume\") pod \"dns-default-x85nm\" (UID: \"db710f25-e573-414c-9129-0dfa945d0b71\") " pod="openshift-dns/dns-default-x85nm"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.695724 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1bb3a268-d628-4c34-b9ca-38d43d82bf86-csi-data-dir\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w"
Jan 20 11:07:02 crc kubenswrapper[4725]: E0120 11:07:02.696638 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:03.196621377 +0000 UTC m=+151.404943420 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691542 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1bb3a268-d628-4c34-b9ca-38d43d82bf86-socket-dir\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.698236 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1bb3a268-d628-4c34-b9ca-38d43d82bf86-plugins-dir\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.702795 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/38cb64e1-bd23-43eb-9eae-7c05f040640b-certs\") pod \"machine-config-server-kkxct\" (UID: \"38cb64e1-bd23-43eb-9eae-7c05f040640b\") " pod="openshift-machine-config-operator/machine-config-server-kkxct"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.704095 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b07c5d50-bb91-412d-b86a-3d736a16a81d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-sh5db\" (UID: \"b07c5d50-bb91-412d-b86a-3d736a16a81d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sh5db"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.704095 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e1eba244-7c59-4933-ad4c-5dccc8fdc854-webhook-cert\") pod \"packageserver-d55dfcdfc-d7t4z\" (UID: \"e1eba244-7c59-4933-ad4c-5dccc8fdc854\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.704703 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8428545d-e40d-4259-b579-ce7bff401888-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tvh28\" (UID: \"8428545d-e40d-4259-b579-ce7bff401888\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.705618 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29ff5711-1e81-4ed0-8acd-6124100de37d-config\") pod \"service-ca-operator-777779d784-6cxkl\" (UID: \"29ff5711-1e81-4ed0-8acd-6124100de37d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.717744 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db710f25-e573-414c-9129-0dfa945d0b71-metrics-tls\") pod \"dns-default-x85nm\" (UID: \"db710f25-e573-414c-9129-0dfa945d0b71\") " pod="openshift-dns/dns-default-x85nm"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.720602 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6023e844-87d6-4f4d-bf86-a685b937cda5-cert\") pod \"ingress-canary-4s7gv\" (UID: \"6023e844-87d6-4f4d-bf86-a685b937cda5\") " pod="openshift-ingress-canary/ingress-canary-4s7gv"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.720771 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f51665c-048e-4625-846b-872a367664e5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6hcj8\" (UID: \"3f51665c-048e-4625-846b-872a367664e5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.725923 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkllc\" (UniqueName: \"kubernetes.io/projected/b8859d17-62ea-47b3-ac63-537e69ec9f90-kube-api-access-gkllc\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.769627 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dqmj\" (UniqueName: \"kubernetes.io/projected/2efafa7a-ca64-4166-a72b-9b70b86953ad-kube-api-access-6dqmj\") pod \"kube-storage-version-migrator-operator-b67b599dd-mljkv\" (UID: \"2efafa7a-ca64-4166-a72b-9b70b86953ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.778773 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.787334 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpntm\" (UniqueName: \"kubernetes.io/projected/fb5baaeb-c041-44ff-9cb4-e2962a0d1b5a-kube-api-access-tpntm\") pod \"cluster-samples-operator-665b6dd947-6s2qz\" (UID: \"fb5baaeb-c041-44ff-9cb4-e2962a0d1b5a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6s2qz"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.793289 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:02 crc kubenswrapper[4725]: E0120 11:07:02.793815 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:03.293801561 +0000 UTC m=+151.502123534 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.796369 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e1eba244-7c59-4933-ad4c-5dccc8fdc854-apiservice-cert\") pod \"packageserver-d55dfcdfc-d7t4z\" (UID: \"e1eba244-7c59-4933-ad4c-5dccc8fdc854\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.796592 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/38cb64e1-bd23-43eb-9eae-7c05f040640b-node-bootstrap-token\") pod \"machine-config-server-kkxct\" (UID: \"38cb64e1-bd23-43eb-9eae-7c05f040640b\") " pod="openshift-machine-config-operator/machine-config-server-kkxct"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.797927 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29ff5711-1e81-4ed0-8acd-6124100de37d-serving-cert\") pod \"service-ca-operator-777779d784-6cxkl\" (UID: \"29ff5711-1e81-4ed0-8acd-6124100de37d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.798413 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/876f0761-c4c3-42f7-81f8-9a26071a7676-signing-key\") pod \"service-ca-9c57cc56f-psvt7\" (UID: \"876f0761-c4c3-42f7-81f8-9a26071a7676\") " pod="openshift-service-ca/service-ca-9c57cc56f-psvt7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.798453 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw4rl\" (UniqueName: \"kubernetes.io/projected/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-kube-api-access-dw4rl\") pod \"collect-profiles-29481780-smks9\" (UID: \"e2d56c6e-b9ad-4de9-8fe6-06b00293050e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.802277 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8428545d-e40d-4259-b579-ce7bff401888-srv-cert\") pod \"olm-operator-6b444d44fb-tvh28\" (UID: \"8428545d-e40d-4259-b579-ce7bff401888\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.802694 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eca1f8da-59f2-404e-a5e0-dbe1a191b885-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-7j2sn\" (UID: \"eca1f8da-59f2-404e-a5e0-dbe1a191b885\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7j2sn"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.802793 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbd48\" (UniqueName: \"kubernetes.io/projected/cf2d94b1-aa78-4a9d-8e32-232f92ec8988-kube-api-access-qbd48\") pod \"migrator-59844c95c7-rlw62\" (UID: \"cf2d94b1-aa78-4a9d-8e32-232f92ec8988\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlw62"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.824590 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gzxg\" (UniqueName: \"kubernetes.io/projected/cb0c9cf6-4966-4bd0-8933-823bc00e103c-kube-api-access-2gzxg\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.835781 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6s2qz"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.843918 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9d0ff97b-8da9-4156-a78b-9ebd6886313f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2dsbj\" (UID: \"9d0ff97b-8da9-4156-a78b-9ebd6886313f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.857771 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.866920 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.894820 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:02 crc kubenswrapper[4725]: E0120 11:07:02.895464 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:03.395448496 +0000 UTC m=+151.603770469 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.896990 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.911749 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.913239 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cghgt\" (UniqueName: \"kubernetes.io/projected/1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1-kube-api-access-cghgt\") pod \"machine-approver-56656f9798-5d4sw\" (UID: \"1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.914376 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58svc\" (UniqueName: \"kubernetes.io/projected/a8d4d608-4f73-4365-a535-71e712884eb9-kube-api-access-58svc\") pod \"machine-config-operator-74547568cd-9lr6k\" (UID: \"a8d4d608-4f73-4365-a535-71e712884eb9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.924362 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdplr\" (UniqueName: \"kubernetes.io/projected/fb63abc7-f429-46c5-aa23-259063c394d0-kube-api-access-qdplr\") pod \"cluster-image-registry-operator-dc59b4c8b-mscpw\" (UID: \"fb63abc7-f429-46c5-aa23-259063c394d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.924647 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.927884 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75fg8\" (UniqueName: \"kubernetes.io/projected/d19058e6-30ec-474e-bada-73b4981a9b65-kube-api-access-75fg8\") pod \"router-default-5444994796-nxchh\" (UID: \"d19058e6-30ec-474e-bada-73b4981a9b65\") " pod="openshift-ingress/router-default-5444994796-nxchh"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.930263 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.941697 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.951782 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.957758 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlw62"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.969943 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5m4f\" (UniqueName: \"kubernetes.io/projected/08bc2ba3-3f1f-40df-bf3d-1d5ed634945b-kube-api-access-w5m4f\") pod \"console-operator-58897d9998-vc6c2\" (UID: \"08bc2ba3-3f1f-40df-bf3d-1d5ed634945b\") " pod="openshift-console-operator/console-operator-58897d9998-vc6c2"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.974234 4725 generic.go:334] "Generic (PLEG): container finished" podID="2216efbd-f6b4-4579-a94a-18c5177df641" containerID="a2f8507fc61c358dce5dbe25990d21561714c310b806b6e3d18b1c5aa921714c" exitCode=0
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.974271 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" event={"ID":"2216efbd-f6b4-4579-a94a-18c5177df641","Type":"ContainerDied","Data":"a2f8507fc61c358dce5dbe25990d21561714c310b806b6e3d18b1c5aa921714c"}
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.978059 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3dff36b-2e27-4c6b-bee4-19cd58833ea7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gs8nk\" (UID: \"c3dff36b-2e27-4c6b-bee4-19cd58833ea7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.996415 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:02 crc kubenswrapper[4725]: E0120 11:07:02.997045 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:03.497023189 +0000 UTC m=+151.705345152 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.015178 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-75nfb"
Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.020323 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk"
Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.025612 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw"
Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.098535 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:03 crc kubenswrapper[4725]: E0120 11:07:03.105042 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:03.605022924 +0000 UTC m=+151.813344907 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.124446 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh5wc\" (UniqueName: \"kubernetes.io/projected/6275b0b4-f9e0-4ecc-8a7a-3d702ec753eb-kube-api-access-rh5wc\") pod \"dns-operator-744455d44c-g28q4\" (UID: \"6275b0b4-f9e0-4ecc-8a7a-3d702ec753eb\") " pod="openshift-dns-operator/dns-operator-744455d44c-g28q4"
Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.131040 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frvfh\" (UniqueName: \"kubernetes.io/projected/f27b4eea-081e-421a-83e9-8a5266163c53-kube-api-access-frvfh\") pod \"authentication-operator-69f744f599-5fgr9\" (UID: \"f27b4eea-081e-421a-83e9-8a5266163c53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9"
Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.136499 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kmh7\" (UniqueName: \"kubernetes.io/projected/600286e6-beb3-40f1-9077-9c8abf34d55a-kube-api-access-7kmh7\") pod \"route-controller-manager-6576b87f9c-lwhzw\" (UID: \"600286e6-beb3-40f1-9077-9c8abf34d55a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw"
Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.138614 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87v9q\" (UniqueName: \"kubernetes.io/projected/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-kube-api-access-87v9q\") pod \"controller-manager-879f6c89f-r5qmp\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp"
Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.141225 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4kqz\" (UniqueName: \"kubernetes.io/projected/6c5d8a1b-5c54-4877-8739-a83ab530197d-kube-api-access-c4kqz\") pod \"downloads-7954f5f757-2hmdd\" (UID: \"6c5d8a1b-5c54-4877-8739-a83ab530197d\") " pod="openshift-console/downloads-7954f5f757-2hmdd"
Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.142456 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-g28q4"
Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.150448 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-vc6c2"
Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.157680 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t9ww\" (UniqueName: \"kubernetes.io/projected/396ed454-f2c7-483a-8aad-0953041099b5-kube-api-access-9t9ww\") pod \"openshift-controller-manager-operator-756b6f6bc6-kshvw\" (UID: \"396ed454-f2c7-483a-8aad-0953041099b5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw"
Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.167501 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fb63abc7-f429-46c5-aa23-259063c394d0-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mscpw\" (UID: \"fb63abc7-f429-46c5-aa23-259063c394d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw"
Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.221111 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-nxchh"
Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.221509 4725 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.221541 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:03 crc kubenswrapper[4725]: E0120 11:07:03.224034 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:03.724002336 +0000 UTC m=+151.932324309 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.224205 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.225141 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:03 crc kubenswrapper[4725]: E0120 11:07:03.225597 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:03.725581165 +0000 UTC m=+151.933903138 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.240612 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdkh9\" (UniqueName: \"kubernetes.io/projected/b07c5d50-bb91-412d-b86a-3d736a16a81d-kube-api-access-tdkh9\") pod \"control-plane-machine-set-operator-78cbb6b69f-sh5db\" (UID: \"b07c5d50-bb91-412d-b86a-3d736a16a81d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sh5db" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.242869 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/293d5f2d-38b8-49ad-b7cc-eaf6ea931e59-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pnkqn\" (UID: \"293d5f2d-38b8-49ad-b7cc-eaf6ea931e59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.244019 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d2rw\" (UniqueName: \"kubernetes.io/projected/9a6106c0-75fa-4285-bc23-06ced58cf133-kube-api-access-8d2rw\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.247828 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.248634 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkqvr\" (UniqueName: \"kubernetes.io/projected/9d0ff97b-8da9-4156-a78b-9ebd6886313f-kube-api-access-dkqvr\") pod \"ingress-operator-5b745b69d9-2dsbj\" (UID: \"9d0ff97b-8da9-4156-a78b-9ebd6886313f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.249417 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.249874 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbmfd\" (UniqueName: \"kubernetes.io/projected/502a4051-5a60-4e90-a3f2-7dc035950a9b-kube-api-access-qbmfd\") pod \"marketplace-operator-79b997595-tgvmj\" (UID: \"502a4051-5a60-4e90-a3f2-7dc035950a9b\") " pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.250109 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.277258 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.282541 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvpn6\" (UniqueName: \"kubernetes.io/projected/1f8986ee-ae07-4ffe-89f2-c73eca4d3465-kube-api-access-rvpn6\") pod \"openshift-config-operator-7777fb866f-cmnx5\" (UID: \"1f8986ee-ae07-4ffe-89f2-c73eca4d3465\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.295253 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/222f710d-f6a2-48e7-9175-55b50f3aba30-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrjzq\" (UID: \"222f710d-f6a2-48e7-9175-55b50f3aba30\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.316195 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sh5db" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.329854 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:03 crc kubenswrapper[4725]: E0120 11:07:03.330405 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:03.83038102 +0000 UTC m=+152.038702983 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.333629 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbqpd\" (UniqueName: \"kubernetes.io/projected/6023e844-87d6-4f4d-bf86-a685b937cda5-kube-api-access-bbqpd\") pod \"ingress-canary-4s7gv\" (UID: \"6023e844-87d6-4f4d-bf86-a685b937cda5\") " pod="openshift-ingress-canary/ingress-canary-4s7gv" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.340952 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfhjr\" (UniqueName: 
\"kubernetes.io/projected/3f51665c-048e-4625-846b-872a367664e5-kube-api-access-nfhjr\") pod \"package-server-manager-789f6589d5-6hcj8\" (UID: \"3f51665c-048e-4625-846b-872a367664e5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.341554 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kwbw\" (UniqueName: \"kubernetes.io/projected/29ff5711-1e81-4ed0-8acd-6124100de37d-kube-api-access-2kwbw\") pod \"service-ca-operator-777779d784-6cxkl\" (UID: \"29ff5711-1e81-4ed0-8acd-6124100de37d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.351427 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.371738 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsjkl\" (UniqueName: \"kubernetes.io/projected/db710f25-e573-414c-9129-0dfa945d0b71-kube-api-access-vsjkl\") pod \"dns-default-x85nm\" (UID: \"db710f25-e573-414c-9129-0dfa945d0b71\") " pod="openshift-dns/dns-default-x85nm" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.371987 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4s7gv" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.372301 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hmvj\" (UniqueName: \"kubernetes.io/projected/1bb3a268-d628-4c34-b9ca-38d43d82bf86-kube-api-access-7hmvj\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.406178 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rlqg\" (UniqueName: \"kubernetes.io/projected/8428545d-e40d-4259-b579-ce7bff401888-kube-api-access-7rlqg\") pod \"olm-operator-6b444d44fb-tvh28\" (UID: \"8428545d-e40d-4259-b579-ce7bff401888\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.407395 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.417345 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-2hmdd" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.427782 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpfs4\" (UniqueName: \"kubernetes.io/projected/38cb64e1-bd23-43eb-9eae-7c05f040640b-kube-api-access-dpfs4\") pod \"machine-config-server-kkxct\" (UID: \"38cb64e1-bd23-43eb-9eae-7c05f040640b\") " pod="openshift-machine-config-operator/machine-config-server-kkxct" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.431735 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:03 crc kubenswrapper[4725]: E0120 11:07:03.432048 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:03.932035404 +0000 UTC m=+152.140357377 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.441097 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkzb8\" (UniqueName: \"kubernetes.io/projected/e1eba244-7c59-4933-ad4c-5dccc8fdc854-kube-api-access-fkzb8\") pod \"packageserver-d55dfcdfc-d7t4z\" (UID: \"e1eba244-7c59-4933-ad4c-5dccc8fdc854\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.459728 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.469799 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc9nk\" (UniqueName: \"kubernetes.io/projected/876f0761-c4c3-42f7-81f8-9a26071a7676-kube-api-access-nc9nk\") pod \"service-ca-9c57cc56f-psvt7\" (UID: \"876f0761-c4c3-42f7-81f8-9a26071a7676\") " pod="openshift-service-ca/service-ca-9c57cc56f-psvt7" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.504730 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.507946 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcnj7\" (UniqueName: \"kubernetes.io/projected/eca1f8da-59f2-404e-a5e0-dbe1a191b885-kube-api-access-zcnj7\") pod \"multus-admission-controller-857f4d67dd-7j2sn\" (UID: \"eca1f8da-59f2-404e-a5e0-dbe1a191b885\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7j2sn" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.532683 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:03 crc kubenswrapper[4725]: E0120 11:07:03.533029 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:04.033014278 +0000 UTC m=+152.241336251 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.563751 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.570201 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.580772 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.586907 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.595324 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.618639 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-psvt7" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.619366 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-7j2sn" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.634295 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:03 crc kubenswrapper[4725]: E0120 11:07:03.634695 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:04.134678524 +0000 UTC m=+152.343000497 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.680599 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kkxct" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.681527 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-x85nm" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.682073 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.784269 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:03 crc kubenswrapper[4725]: E0120 11:07:03.784734 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:04.284713814 +0000 UTC m=+152.493035787 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.885563 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:03 crc kubenswrapper[4725]: E0120 11:07:03.885823 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-20 11:07:04.385813472 +0000 UTC m=+152.594135445 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.986789 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:03 crc kubenswrapper[4725]: E0120 11:07:03.987302 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:04.487280362 +0000 UTC m=+152.695602345 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:04 crc kubenswrapper[4725]: I0120 11:07:04.125911 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:04 crc kubenswrapper[4725]: E0120 11:07:04.126622 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:04.626609945 +0000 UTC m=+152.834931918 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:04 crc kubenswrapper[4725]: I0120 11:07:04.232246 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:04 crc kubenswrapper[4725]: E0120 11:07:04.232937 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:04.732922597 +0000 UTC m=+152.941244570 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:04 crc kubenswrapper[4725]: I0120 11:07:04.266234 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9fb93fadf612c16a3fafc9a8b21d7b94afecd42163dbbdb1a7d80ae2d8e0f73c"} Jan 20 11:07:04 crc kubenswrapper[4725]: I0120 11:07:04.267414 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-nxchh" event={"ID":"d19058e6-30ec-474e-bada-73b4981a9b65","Type":"ContainerStarted","Data":"0c38d4674bf7c1beaea3cfdb53f3b8819c62e7ae48d2467ce6b5c8f62cb48fc3"} Jan 20 11:07:04 crc kubenswrapper[4725]: I0120 11:07:04.269135 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw" event={"ID":"1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1","Type":"ContainerStarted","Data":"70ef496b54ee860579e36d9d44431303cfe66f2365d9ab45098f33470f21f177"} Jan 20 11:07:04 crc kubenswrapper[4725]: I0120 11:07:04.294736 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"02080eac58544da25b823b4ef631a4458d792115e11928eb9f6dcce5008672f0"} Jan 20 11:07:04 crc kubenswrapper[4725]: I0120 11:07:04.360742 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:04 crc kubenswrapper[4725]: E0120 11:07:04.361053 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:04.861037567 +0000 UTC m=+153.069359540 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:04 crc kubenswrapper[4725]: W0120 11:07:04.431514 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38cb64e1_bd23_43eb_9eae_7c05f040640b.slice/crio-320aef01031df0ea64dce820d35f075a1ba8633929cc34387075e5df6a12fb1d WatchSource:0}: Error finding container 320aef01031df0ea64dce820d35f075a1ba8633929cc34387075e5df6a12fb1d: Status 404 returned error can't find the container with id 320aef01031df0ea64dce820d35f075a1ba8633929cc34387075e5df6a12fb1d Jan 20 11:07:04 crc kubenswrapper[4725]: I0120 11:07:04.461554 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") 
" Jan 20 11:07:04 crc kubenswrapper[4725]: E0120 11:07:04.461849 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:04.961834024 +0000 UTC m=+153.170155997 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:04 crc kubenswrapper[4725]: I0120 11:07:04.574652 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:04 crc kubenswrapper[4725]: E0120 11:07:04.575334 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:05.075321823 +0000 UTC m=+153.283643796 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:04 crc kubenswrapper[4725]: I0120 11:07:04.675972 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:04 crc kubenswrapper[4725]: E0120 11:07:04.676410 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:05.17639407 +0000 UTC m=+153.384716033 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:04 crc kubenswrapper[4725]: I0120 11:07:04.801827 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:04 crc kubenswrapper[4725]: E0120 11:07:04.802239 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:05.302219087 +0000 UTC m=+153.510541090 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:04 crc kubenswrapper[4725]: I0120 11:07:04.808707 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5"] Jan 20 11:07:04 crc kubenswrapper[4725]: I0120 11:07:04.902995 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:04 crc kubenswrapper[4725]: E0120 11:07:04.903331 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:05.403291384 +0000 UTC m=+153.611613357 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:04 crc kubenswrapper[4725]: I0120 11:07:04.903450 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:04 crc kubenswrapper[4725]: E0120 11:07:04.904177 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:05.404166861 +0000 UTC m=+153.612488834 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.004380 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:05 crc kubenswrapper[4725]: E0120 11:07:05.004723 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:05.504700402 +0000 UTC m=+153.713022375 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.110257 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:05 crc kubenswrapper[4725]: E0120 11:07:05.110633 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:05.610616761 +0000 UTC m=+153.818938734 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.286562 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:05 crc kubenswrapper[4725]: E0120 11:07:05.286779 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:05.786748414 +0000 UTC m=+153.995070387 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.287448 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:05 crc kubenswrapper[4725]: E0120 11:07:05.289355 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:05.788685825 +0000 UTC m=+153.997007798 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.300185 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" event={"ID":"2216efbd-f6b4-4579-a94a-18c5177df641","Type":"ContainerStarted","Data":"ea3e31ced5d335052e2b41c8aeaafdb835975b5e6cd58067d45fc0c387cc3f26"} Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.302700 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"191a80d1f7d8bfa4554dcb5899e3b714f5cbd9f67af9d4d632c67d8927e8f2ea"} Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.303489 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.314044 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-nxchh" event={"ID":"d19058e6-30ec-474e-bada-73b4981a9b65","Type":"ContainerStarted","Data":"43c681a5995d3854b44911ef1c1d6ce4a7c57dbe4132c1c823f912e6e2e80735"} Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.317071 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kkxct" event={"ID":"38cb64e1-bd23-43eb-9eae-7c05f040640b","Type":"ContainerStarted","Data":"320aef01031df0ea64dce820d35f075a1ba8633929cc34387075e5df6a12fb1d"} Jan 20 11:07:05 crc 
kubenswrapper[4725]: I0120 11:07:05.392784 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" podStartSLOduration=129.392767097 podStartE2EDuration="2m9.392767097s" podCreationTimestamp="2026-01-20 11:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:05.392528759 +0000 UTC m=+153.600850732" watchObservedRunningTime="2026-01-20 11:07:05.392767097 +0000 UTC m=+153.601089070" Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.393143 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:05 crc kubenswrapper[4725]: E0120 11:07:05.393195 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:05.89318271 +0000 UTC m=+154.101504683 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.395585 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:05 crc kubenswrapper[4725]: E0120 11:07:05.397560 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:05.897550068 +0000 UTC m=+154.105872031 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.526211 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:05 crc kubenswrapper[4725]: E0120 11:07:05.526910 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:06.026894136 +0000 UTC m=+154.235216099 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.628783 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:05 crc kubenswrapper[4725]: E0120 11:07:05.629778 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:06.129733618 +0000 UTC m=+154.338055591 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.730240 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:05 crc kubenswrapper[4725]: E0120 11:07:05.730458 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:06.230430153 +0000 UTC m=+154.438752126 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.730554 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:05 crc kubenswrapper[4725]: E0120 11:07:05.730942 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:06.230924539 +0000 UTC m=+154.439246512 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.840772 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:05 crc kubenswrapper[4725]: E0120 11:07:05.840959 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:06.340934168 +0000 UTC m=+154.549256141 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.841114 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:05 crc kubenswrapper[4725]: E0120 11:07:05.841447 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:06.341435364 +0000 UTC m=+154.549757337 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.986189 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:05 crc kubenswrapper[4725]: E0120 11:07:05.986976 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:06.486958882 +0000 UTC m=+154.695280855 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.087768 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:06 crc kubenswrapper[4725]: E0120 11:07:06.088207 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:06.588185544 +0000 UTC m=+154.796507517 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.190617 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:06 crc kubenswrapper[4725]: E0120 11:07:06.190785 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:06.690747658 +0000 UTC m=+154.899069641 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.191197 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:06 crc kubenswrapper[4725]: E0120 11:07:06.191551 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:06.691540863 +0000 UTC m=+154.899862836 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.222365 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-nxchh" Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.262693 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 11:07:06 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Jan 20 11:07:06 crc kubenswrapper[4725]: [+]process-running ok Jan 20 11:07:06 crc kubenswrapper[4725]: healthz check failed Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.262751 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.295390 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:06 crc kubenswrapper[4725]: E0120 11:07:06.295871 4725 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:06.795851681 +0000 UTC m=+155.004173654 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.322180 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5" event={"ID":"e3e30f02-3956-427a-a1f3-6e1d51f242d6","Type":"ContainerStarted","Data":"7475b0a180909d8e7e2578a99d6ac8c3f674e276d1589c0305e9d5b357a14cd4"} Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.322239 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5" event={"ID":"e3e30f02-3956-427a-a1f3-6e1d51f242d6","Type":"ContainerStarted","Data":"b91537ee475d833a1b40b9c66e75b163eebb41b5cddd9fca919949159ee9b071"} Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.331105 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw" event={"ID":"1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1","Type":"ContainerStarted","Data":"c21910ab12f8c87cbb3174064be9a0a13864273587a192c1502a956337a668b2"} Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.331143 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw" 
event={"ID":"1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1","Type":"ContainerStarted","Data":"c544212299330dddbc7a70f8c9e56dbce0bb5f2b4da38586f71d3872e1b9b26a"} Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.336554 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b1a90f8b40736ec87f3ca1352b03efea881688a553f26527d8ed8c7258d2cac0"} Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.339146 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kkxct" event={"ID":"38cb64e1-bd23-43eb-9eae-7c05f040640b","Type":"ContainerStarted","Data":"9f3a4019b84a995abbc5b8d13b8adbbe9d6934baf1034e588efc3380695c2846"} Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.342399 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-nxchh" podStartSLOduration=132.342380009 podStartE2EDuration="2m12.342380009s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:05.440650647 +0000 UTC m=+153.648972630" watchObservedRunningTime="2026-01-20 11:07:06.342380009 +0000 UTC m=+154.550701982" Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.344713 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5" podStartSLOduration=133.344701962 podStartE2EDuration="2m13.344701962s" podCreationTimestamp="2026-01-20 11:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:06.343098522 +0000 UTC m=+154.551420495" watchObservedRunningTime="2026-01-20 11:07:06.344701962 
+0000 UTC m=+154.553023925" Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.382200 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-kkxct" podStartSLOduration=7.382176744 podStartE2EDuration="7.382176744s" podCreationTimestamp="2026-01-20 11:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:06.363456113 +0000 UTC m=+154.571778086" watchObservedRunningTime="2026-01-20 11:07:06.382176744 +0000 UTC m=+154.590498717" Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.397945 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:06 crc kubenswrapper[4725]: E0120 11:07:06.398284 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:06.898269741 +0000 UTC m=+155.106591714 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.499661 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:06 crc kubenswrapper[4725]: E0120 11:07:06.501395 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:07.001376672 +0000 UTC m=+155.209698655 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.585333 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.585627 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.601253 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:06 crc kubenswrapper[4725]: E0120 11:07:06.601602 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:07.101590751 +0000 UTC m=+155.309912724 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.616919 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw" podStartSLOduration=133.616896194 podStartE2EDuration="2m13.616896194s" podCreationTimestamp="2026-01-20 11:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:06.403601009 +0000 UTC m=+154.611922992" watchObservedRunningTime="2026-01-20 11:07:06.616896194 +0000 UTC m=+154.825218167" Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.619795 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5fj5p"] Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.657752 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-twkw7"] Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.663760 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv"] Jan 20 11:07:06 crc kubenswrapper[4725]: W0120 11:07:06.697355 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2efafa7a_ca64_4166_a72b_9b70b86953ad.slice/crio-5602ff652bde473e5be3a3e166d41f41197b0f89ecb745b21765912b2e42e73e WatchSource:0}: Error finding container 
5602ff652bde473e5be3a3e166d41f41197b0f89ecb745b21765912b2e42e73e: Status 404 returned error can't find the container with id 5602ff652bde473e5be3a3e166d41f41197b0f89ecb745b21765912b2e42e73e Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.705808 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:06 crc kubenswrapper[4725]: E0120 11:07:06.706450 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:07.206434267 +0000 UTC m=+155.414756240 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.710198 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9"] Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.714140 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw"] Jan 20 11:07:06 crc kubenswrapper[4725]: W0120 11:07:06.726471 4725 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb63abc7_f429_46c5_aa23_259063c394d0.slice/crio-0bd1527cc290d8dc83320ddde427bee394477a0d5ad79dbf02bbaa39c6bfdaf4 WatchSource:0}: Error finding container 0bd1527cc290d8dc83320ddde427bee394477a0d5ad79dbf02bbaa39c6bfdaf4: Status 404 returned error can't find the container with id 0bd1527cc290d8dc83320ddde427bee394477a0d5ad79dbf02bbaa39c6bfdaf4 Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.809288 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:06 crc kubenswrapper[4725]: E0120 11:07:06.809630 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:07.309618371 +0000 UTC m=+155.517940344 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.837324 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.841742 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6s2qz"] Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.844855 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5"] Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.851655 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k"] Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.853315 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hhz9f"] Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.866943 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-75nfb"] Jan 20 11:07:06 crc kubenswrapper[4725]: W0120 11:07:06.880125 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8d4d608_4f73_4365_a535_71e712884eb9.slice/crio-345af4856bb883808f001f9c1d32effc6001ae28669bd0f91b80cd216c970b1c WatchSource:0}: Error finding container 
345af4856bb883808f001f9c1d32effc6001ae28669bd0f91b80cd216c970b1c: Status 404 returned error can't find the container with id 345af4856bb883808f001f9c1d32effc6001ae28669bd0f91b80cd216c970b1c Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.882360 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw"] Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.901020 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rlw62"] Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.910007 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:06 crc kubenswrapper[4725]: E0120 11:07:06.910168 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:07.410154051 +0000 UTC m=+155.618476014 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.910635 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:06 crc kubenswrapper[4725]: E0120 11:07:06.911023 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:07.410997977 +0000 UTC m=+155.619319950 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:06 crc kubenswrapper[4725]: W0120 11:07:06.918494 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8859d17_62ea_47b3_ac63_537e69ec9f90.slice/crio-197cadc6e223e4cc9bea109568305559cc380b11622cebe1a21d2825bb4a630b WatchSource:0}: Error finding container 197cadc6e223e4cc9bea109568305559cc380b11622cebe1a21d2825bb4a630b: Status 404 returned error can't find the container with id 197cadc6e223e4cc9bea109568305559cc380b11622cebe1a21d2825bb4a630b Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.020725 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:07 crc kubenswrapper[4725]: E0120 11:07:07.021264 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:07.521243593 +0000 UTC m=+155.729565566 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.021369 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:07 crc kubenswrapper[4725]: E0120 11:07:07.021670 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:07.521660826 +0000 UTC m=+155.729982799 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.140583 4725 csr.go:261] certificate signing request csr-nhl29 is approved, waiting to be issued Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.141006 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:07 crc kubenswrapper[4725]: E0120 11:07:07.141419 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:07.641404333 +0000 UTC m=+155.849726306 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.149030 4725 csr.go:257] certificate signing request csr-nhl29 is issued Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.168105 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8"] Jan 20 11:07:07 crc kubenswrapper[4725]: W0120 11:07:07.190347 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f51665c_048e_4625_846b_872a367664e5.slice/crio-35099c354d9808f9453b92ab657eafedfcc7226690f0f8bb2d8ef778a9f78b4e WatchSource:0}: Error finding container 35099c354d9808f9453b92ab657eafedfcc7226690f0f8bb2d8ef778a9f78b4e: Status 404 returned error can't find the container with id 35099c354d9808f9453b92ab657eafedfcc7226690f0f8bb2d8ef778a9f78b4e Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.243016 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:07 crc kubenswrapper[4725]: E0120 11:07:07.243390 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-20 11:07:07.743379417 +0000 UTC m=+155.951701390 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.248376 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 11:07:07 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Jan 20 11:07:07 crc kubenswrapper[4725]: [+]process-running ok Jan 20 11:07:07 crc kubenswrapper[4725]: healthz check failed Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.248435 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.316954 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r5qmp"] Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.336057 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4s7gv"] Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.342560 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-psvt7"] Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 
11:07:07.344633 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:07 crc kubenswrapper[4725]: E0120 11:07:07.344920 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:07.844902889 +0000 UTC m=+156.053224862 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.349641 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk"] Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.349936 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8" event={"ID":"3f51665c-048e-4625-846b-872a367664e5","Type":"ContainerStarted","Data":"35099c354d9808f9453b92ab657eafedfcc7226690f0f8bb2d8ef778a9f78b4e"} Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.351874 4725 generic.go:334] "Generic (PLEG): container finished" podID="cb0c9cf6-4966-4bd0-8933-823bc00e103c" 
containerID="2e68bb122f901422bd534d65f561bdbcb16452da8ef99a08675bb394f96b3e43" exitCode=0 Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.351918 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-twkw7" event={"ID":"cb0c9cf6-4966-4bd0-8933-823bc00e103c","Type":"ContainerDied","Data":"2e68bb122f901422bd534d65f561bdbcb16452da8ef99a08675bb394f96b3e43"} Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.351933 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-twkw7" event={"ID":"cb0c9cf6-4966-4bd0-8933-823bc00e103c","Type":"ContainerStarted","Data":"2379ced5ce84665a02e86767ee98a7419d2fff445562a07319f9b750453c3096"} Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.356883 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw" event={"ID":"fb63abc7-f429-46c5-aa23-259063c394d0","Type":"ContainerStarted","Data":"9f7564fd9545e487eed6bf5f4a45ad8d471c4d9f83c4d5be7e9e772823435ecb"} Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.356913 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw" event={"ID":"fb63abc7-f429-46c5-aa23-259063c394d0","Type":"ContainerStarted","Data":"0bd1527cc290d8dc83320ddde427bee394477a0d5ad79dbf02bbaa39c6bfdaf4"} Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.362611 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv" event={"ID":"2efafa7a-ca64-4166-a72b-9b70b86953ad","Type":"ContainerStarted","Data":"f30daa4ded68e22c78405d7c86aeaa709a9dcee3fc1fa0251486cab412425528"} Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.362648 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv" event={"ID":"2efafa7a-ca64-4166-a72b-9b70b86953ad","Type":"ContainerStarted","Data":"5602ff652bde473e5be3a3e166d41f41197b0f89ecb745b21765912b2e42e73e"} Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.363779 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl"] Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.376047 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-g28q4"] Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.377769 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv"] Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.382541 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6s2qz" event={"ID":"fb5baaeb-c041-44ff-9cb4-e2962a0d1b5a","Type":"ContainerStarted","Data":"e7349390a4c35e0628a38b9d3d64db341215b6a1e71ad8e3c1a4e13f7b5153c5"} Jan 20 11:07:07 crc kubenswrapper[4725]: W0120 11:07:07.382617 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb4612ff_dcf7_4e19_af27_fb8b3b54ce39.slice/crio-3aabb471cfdad379863cc9d4e63ad21b453b02806d14a79762da7bd36f235094 WatchSource:0}: Error finding container 3aabb471cfdad379863cc9d4e63ad21b453b02806d14a79762da7bd36f235094: Status 404 returned error can't find the container with id 3aabb471cfdad379863cc9d4e63ad21b453b02806d14a79762da7bd36f235094 Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.418959 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-vc6c2"] Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.424258 4725 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-x85nm"] Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.429451 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj"] Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.438881 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5fgr9"] Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.444042 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k" event={"ID":"a8d4d608-4f73-4365-a535-71e712884eb9","Type":"ContainerStarted","Data":"22642b0267703d0f0d4a746a0f03d271c2df67abeada67795f526bdde0045fd5"} Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.444116 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k" event={"ID":"a8d4d608-4f73-4365-a535-71e712884eb9","Type":"ContainerStarted","Data":"345af4856bb883808f001f9c1d32effc6001ae28669bd0f91b80cd216c970b1c"} Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.445986 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:07 crc kubenswrapper[4725]: E0120 11:07:07.476832 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:07.976804708 +0000 UTC m=+156.185126691 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.482209 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw"] Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.488360 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn"] Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.493570 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lhx4z"] Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.493611 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-75nfb" event={"ID":"b8859d17-62ea-47b3-ac63-537e69ec9f90","Type":"ContainerStarted","Data":"c331766254a52bfce5ebfa9fcd1396c4a0f89ca82a69986a6b164641bcc92065"} Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.493633 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-75nfb" event={"ID":"b8859d17-62ea-47b3-ac63-537e69ec9f90","Type":"ContainerStarted","Data":"197cadc6e223e4cc9bea109568305559cc380b11622cebe1a21d2825bb4a630b"} Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.502697 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sh5db"] Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.514544 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console/downloads-7954f5f757-2hmdd"] Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.522184 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9" event={"ID":"e2d56c6e-b9ad-4de9-8fe6-06b00293050e","Type":"ContainerStarted","Data":"e8e7a4e36aba81c1bb4622af4c301d49b30996cd6ad2e2e0a5c6e98da1b99ab0"} Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.522239 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9" event={"ID":"e2d56c6e-b9ad-4de9-8fe6-06b00293050e","Type":"ContainerStarted","Data":"65a351e547318d4029df04eb1e821ccf32f46b5e2d9c44ec151c7be7e639c1ca"} Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.527481 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9vt8w"] Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.530159 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq"] Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.531619 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlw62" event={"ID":"cf2d94b1-aa78-4a9d-8e32-232f92ec8988","Type":"ContainerStarted","Data":"d8bc74a2607b75eee22bf56295877481d8bdd99f60328b27f4fb6dc61d8b7716"} Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.531648 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlw62" event={"ID":"cf2d94b1-aa78-4a9d-8e32-232f92ec8988","Type":"ContainerStarted","Data":"6d418525dbf979420269910e5a85f8365d7d1f3df290bb8a38ef200cbacfa9bf"} Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.534929 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f" 
event={"ID":"808fb947-228d-42c4-ba11-480348f80d8a","Type":"ContainerStarted","Data":"dba556c6bd771c1ed947e4e8bf41bbc3e5cf61149514ef85e454d5501a39fe07"} Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.534970 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f" event={"ID":"808fb947-228d-42c4-ba11-480348f80d8a","Type":"ContainerStarted","Data":"17687c455efaeae5551e6c06d6262cc353e5526789a594cbcfef191cf08090c4"} Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.537328 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw" podStartSLOduration=133.537316185 podStartE2EDuration="2m13.537316185s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:07.43946308 +0000 UTC m=+155.647785053" watchObservedRunningTime="2026-01-20 11:07:07.537316185 +0000 UTC m=+155.745638158" Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.548888 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw" event={"ID":"7f131da2-d815-48eb-b2ab-7f6df6a4039a","Type":"ContainerStarted","Data":"8d5580409dec8f34b75a3bbe4c60893b5001b4b4a1c9b037046003e5f75a7326"} Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.548941 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw" event={"ID":"7f131da2-d815-48eb-b2ab-7f6df6a4039a","Type":"ContainerStarted","Data":"baf54609128ace7d70acf0d367555b43e502c76e9fc46dd37480fda5ebc664d4"} Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.550027 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw" 
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.550359 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tgvmj"] Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.552089 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv" podStartSLOduration=132.55206562 podStartE2EDuration="2m12.55206562s" podCreationTimestamp="2026-01-20 11:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:07.488290569 +0000 UTC m=+155.696612542" watchObservedRunningTime="2026-01-20 11:07:07.55206562 +0000 UTC m=+155.760387593" Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.554185 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-75nfb" podStartSLOduration=133.554175557 podStartE2EDuration="2m13.554175557s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:07.520272048 +0000 UTC m=+155.728594041" watchObservedRunningTime="2026-01-20 11:07:07.554175557 +0000 UTC m=+155.762497530" Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.554389 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw"] Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.555937 4725 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-hqvrw container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 20 11:07:07 crc 
kubenswrapper[4725]: I0120 11:07:07.555980 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw" podUID="7f131da2-d815-48eb-b2ab-7f6df6a4039a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.561159 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9" podStartSLOduration=133.561138796 podStartE2EDuration="2m13.561138796s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:07.54161991 +0000 UTC m=+155.749941883" watchObservedRunningTime="2026-01-20 11:07:07.561138796 +0000 UTC m=+155.769460769" Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.565443 4725 generic.go:334] "Generic (PLEG): container finished" podID="1f8986ee-ae07-4ffe-89f2-c73eca4d3465" containerID="2fbae3e4c5ba192e1288227633bbd0bea8731f438425a2f82de85dd88045865a" exitCode=0 Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.565531 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5" event={"ID":"1f8986ee-ae07-4ffe-89f2-c73eca4d3465","Type":"ContainerDied","Data":"2fbae3e4c5ba192e1288227633bbd0bea8731f438425a2f82de85dd88045865a"} Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.565568 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5" event={"ID":"1f8986ee-ae07-4ffe-89f2-c73eca4d3465","Type":"ContainerStarted","Data":"1cca0ecc8497adce399210ce48e93b9b21075eac4a44aaa49fea4cb5f7e3ee8a"} Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.578297 4725 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2fbde62b9831aa7525ab1a824d6d69162a40c06bf17f3f1ed6515ab9b7d33004"} Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.578339 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"77cf07c0bba9ef8e38147bb27882322b3a3d47058b152d80dea6d0f4917ab4c6"} Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.586881 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw" podStartSLOduration=132.586860497 podStartE2EDuration="2m12.586860497s" podCreationTimestamp="2026-01-20 11:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:07.563794849 +0000 UTC m=+155.772116842" watchObservedRunningTime="2026-01-20 11:07:07.586860497 +0000 UTC m=+155.795182470" Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.586984 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:07 crc kubenswrapper[4725]: E0120 11:07:07.588276 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-20 11:07:08.088255601 +0000 UTC m=+156.296577574 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.591663 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-7j2sn"] Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.598210 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p" event={"ID":"ac3b56d0-256f-40f8-b2ff-2271f82ff750","Type":"ContainerStarted","Data":"0f1fa3ac1364ebadff50537496435edbd43621a9de38871245a6371017182864"} Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.598264 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p" event={"ID":"ac3b56d0-256f-40f8-b2ff-2271f82ff750","Type":"ContainerStarted","Data":"ef1c9a7ed1b9223c25f0c6ab857ca3e2041759c003968197fa76daf44b08d243"} Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.616712 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z"] Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.628032 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.672402 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p" 
podStartSLOduration=133.672378324 podStartE2EDuration="2m13.672378324s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:07.669649968 +0000 UTC m=+155.877971951" watchObservedRunningTime="2026-01-20 11:07:07.672378324 +0000 UTC m=+155.880700297" Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.674470 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28"] Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.777630 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:07 crc kubenswrapper[4725]: E0120 11:07:07.781998 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:08.28198318 +0000 UTC m=+156.490305153 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.881942 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:07 crc kubenswrapper[4725]: E0120 11:07:07.882862 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:08.382812319 +0000 UTC m=+156.591134292 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.983313 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:07 crc kubenswrapper[4725]: E0120 11:07:07.983894 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:08.483878315 +0000 UTC m=+156.692200288 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.051155 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.085355 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:08 crc kubenswrapper[4725]: E0120 11:07:08.085692 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:08.585677915 +0000 UTC m=+156.793999878 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.150903 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-20 11:02:07 +0000 UTC, rotation deadline is 2026-11-26 21:51:11.824184064 +0000 UTC Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.150961 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7450h44m3.6732248s for next certificate rotation Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.186653 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:08 crc kubenswrapper[4725]: E0120 11:07:08.186989 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:08.686976739 +0000 UTC m=+156.895298712 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.226264 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 11:07:08 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Jan 20 11:07:08 crc kubenswrapper[4725]: [+]process-running ok Jan 20 11:07:08 crc kubenswrapper[4725]: healthz check failed Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.226313 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.292158 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:08 crc kubenswrapper[4725]: E0120 11:07:08.292272 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-20 11:07:08.792249708 +0000 UTC m=+157.000571681 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.292861 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:08 crc kubenswrapper[4725]: E0120 11:07:08.293379 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:08.793368594 +0000 UTC m=+157.001690567 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.394269 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:08 crc kubenswrapper[4725]: E0120 11:07:08.394607 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:08.894578065 +0000 UTC m=+157.102900038 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.394840 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:08 crc kubenswrapper[4725]: E0120 11:07:08.395192 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:08.895180164 +0000 UTC m=+157.103502137 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.503611 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:08 crc kubenswrapper[4725]: E0120 11:07:08.503831 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:09.003799009 +0000 UTC m=+157.212120982 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.611555 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:08 crc kubenswrapper[4725]: E0120 11:07:08.611846 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:09.111835285 +0000 UTC m=+157.320157258 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.697592 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8" event={"ID":"3f51665c-048e-4625-846b-872a367664e5","Type":"ContainerStarted","Data":"6c9aae534dfcaf85a01bb59882019a09dd63f3cdb8ff8a81eadda6f1b30d5c0a"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.700509 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk" event={"ID":"c3dff36b-2e27-4c6b-bee4-19cd58833ea7","Type":"ContainerStarted","Data":"bfb6b9f87d7807de82f889139680e6cafe692c66fe25ca54d534263dd2f4f22e"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.707473 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" event={"ID":"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39","Type":"ContainerStarted","Data":"3aabb471cfdad379863cc9d4e63ad21b453b02806d14a79762da7bd36f235094"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.712327 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:08 crc kubenswrapper[4725]: E0120 11:07:08.712632 4725 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:09.212617372 +0000 UTC m=+157.420939345 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.787796 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k" event={"ID":"a8d4d608-4f73-4365-a535-71e712884eb9","Type":"ContainerStarted","Data":"2f44ccaad1054e141bccb2fc2d00e1ca136ba341c2c3e5f6648bb3ca9d7659fd"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.791235 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw" event={"ID":"396ed454-f2c7-483a-8aad-0953041099b5","Type":"ContainerStarted","Data":"7c8e9b4a6d96cf3000ea2cea8585188b82d89e8eeb465223f79e43e793a0e860"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.797139 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9" event={"ID":"f27b4eea-081e-421a-83e9-8a5266163c53","Type":"ContainerStarted","Data":"977259fa46250cfa3faaed91e90d3a012f9520c8708543df9be4c3821af4a14b"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.815474 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:08 crc kubenswrapper[4725]: E0120 11:07:08.817196 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:09.31718484 +0000 UTC m=+157.525506813 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.841246 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq" event={"ID":"222f710d-f6a2-48e7-9175-55b50f3aba30","Type":"ContainerStarted","Data":"da8ad133548044f221e0607f52878ae85abe37a7302d30eb560a3905b5f05d4b"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.844796 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2hmdd" event={"ID":"6c5d8a1b-5c54-4877-8739-a83ab530197d","Type":"ContainerStarted","Data":"146e2a40139c8580a82a96198237e6caf20d339116832d1224d6065c5d51bf27"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.846006 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" 
event={"ID":"502a4051-5a60-4e90-a3f2-7dc035950a9b","Type":"ContainerStarted","Data":"0b78375c7ed8f9916a58dd59c26f3043217b694c6d335a958edaddd11c21782a"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.846957 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" event={"ID":"1bb3a268-d628-4c34-b9ca-38d43d82bf86","Type":"ContainerStarted","Data":"7824144afcc8a399d8ad02f47566e1f9f7e8fccfd9082edf2a275537cfa7c907"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.856560 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5" Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.857934 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-7j2sn" event={"ID":"eca1f8da-59f2-404e-a5e0-dbe1a191b885","Type":"ContainerStarted","Data":"24b3efbd35deaf29f8ae99f73d94b4a37207439f887544f47e5d619803f53177"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.859058 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4s7gv" event={"ID":"6023e844-87d6-4f4d-bf86-a685b937cda5","Type":"ContainerStarted","Data":"23efdca88d24391c79cbdc8101644526dfe796074cbe106632842089a3aea5ff"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.859100 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4s7gv" event={"ID":"6023e844-87d6-4f4d-bf86-a685b937cda5","Type":"ContainerStarted","Data":"bec68e0cbb9ef68aa43cb22135599ea459c3058d3751e304838b8ee5856a5298"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.862286 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sh5db" 
event={"ID":"b07c5d50-bb91-412d-b86a-3d736a16a81d","Type":"ContainerStarted","Data":"17f031bacd1eda1c2ba5121f6412c48147956df7560c565dbac566f72b8d91d9"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.863714 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-twkw7" event={"ID":"cb0c9cf6-4966-4bd0-8933-823bc00e103c","Type":"ContainerStarted","Data":"8c9ae23bdbd75e8f49ad08210ad1b5884a445b42c455c9175cf22d7caa19bfef"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.864986 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv" event={"ID":"4df8c05f-b523-439b-908b-c4f34b22b7e9","Type":"ContainerStarted","Data":"50a462970ad6d65adb263c111a72af15f6635fc334edd6ec6c733371acac627f"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.866575 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6s2qz" event={"ID":"fb5baaeb-c041-44ff-9cb4-e2962a0d1b5a","Type":"ContainerStarted","Data":"0ad6a5b17a1ff2606b662eaa2a0e8d9edadea69bba3e967d770049369283aec3"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.866603 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6s2qz" event={"ID":"fb5baaeb-c041-44ff-9cb4-e2962a0d1b5a","Type":"ContainerStarted","Data":"8667f6a3ff3b7a1116ad8912ad410ee6fb4a8a3c9575abcccafa5a4aba6df766"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.867896 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj" event={"ID":"9d0ff97b-8da9-4156-a78b-9ebd6886313f","Type":"ContainerStarted","Data":"d3d2b9ac9980bdb6cc0f6489ef75a6ab145564c82979a42ac7db2801b2c88e21"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.869175 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlw62" event={"ID":"cf2d94b1-aa78-4a9d-8e32-232f92ec8988","Type":"ContainerStarted","Data":"fbd9c7453ed542b308e811a5a43b148b28154367b997c7b9389bd85162bc19b8"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.870461 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-vc6c2" event={"ID":"08bc2ba3-3f1f-40df-bf3d-1d5ed634945b","Type":"ContainerStarted","Data":"9e1ef4fbb89013bf638486d8be02122f0bc36ac09c8b6e368cea4cb9dc8d23eb"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.871192 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" event={"ID":"9a6106c0-75fa-4285-bc23-06ced58cf133","Type":"ContainerStarted","Data":"f39928c8d7256975b95a8abe066b49247f38d754512e9fe57502d4feea0d8501"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.872168 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-g28q4" event={"ID":"6275b0b4-f9e0-4ecc-8a7a-3d702ec753eb","Type":"ContainerStarted","Data":"0ae322f03e68ee5dfe43a307a875b1e4f6979860e0505612a2338182052c17a2"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.873353 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" event={"ID":"600286e6-beb3-40f1-9077-9c8abf34d55a","Type":"ContainerStarted","Data":"0ec30deae86a7a8ddfff86819ddc6f4f12f6f2fd12406fe866d400de06f3575a"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.873379 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" event={"ID":"600286e6-beb3-40f1-9077-9c8abf34d55a","Type":"ContainerStarted","Data":"ade77836dcd269f9c5de0b97ad651f7a735e267f67b9c6aa9acfc5f72e48f82f"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.874255 4725 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.876089 4725 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-lwhzw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.876127 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" podUID="600286e6-beb3-40f1-9077-9c8abf34d55a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.876411 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-psvt7" event={"ID":"876f0761-c4c3-42f7-81f8-9a26071a7676","Type":"ContainerStarted","Data":"d5639b34e08781dea22f4cadbcc373a0ec2674e0868509a628145723a268aa0f"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.878244 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-x85nm" event={"ID":"db710f25-e573-414c-9129-0dfa945d0b71","Type":"ContainerStarted","Data":"cf7ba3a3a7274ff7821b5279e40ba6e2bd9919ddb8fe93c0e131e2e112f0358e"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.880420 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28" event={"ID":"8428545d-e40d-4259-b579-ce7bff401888","Type":"ContainerStarted","Data":"1b572bd552d0092cfbb3df230d8d034e2e5ab55b33aa4f3b57fee11c4f64e6e4"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.884146 4725 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl" event={"ID":"29ff5711-1e81-4ed0-8acd-6124100de37d","Type":"ContainerStarted","Data":"19358ebd603e8195e69c6c2b23e06e1e71a1829b126088ccbc3ad70199c568ac"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.886266 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn" event={"ID":"293d5f2d-38b8-49ad-b7cc-eaf6ea931e59","Type":"ContainerStarted","Data":"7da506d1dfa708183b544bfc4756606b68f7e40ea8c138fead633a74346c076f"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.889812 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z" event={"ID":"e1eba244-7c59-4933-ad4c-5dccc8fdc854","Type":"ContainerStarted","Data":"9e42315c152eaa8fbaf3a0fc31f4242fe3c5828fd4c64a1a0d048a412c00207b"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.894013 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f" event={"ID":"808fb947-228d-42c4-ba11-480348f80d8a","Type":"ContainerStarted","Data":"f05222988264b316f6dffb71d4eb7816c4979708a52ff1a83b0e27db6b9aeb83"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.901910 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw" Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.916443 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:08 crc kubenswrapper[4725]: E0120 11:07:08.917925 4725 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:09.417911316 +0000 UTC m=+157.626233289 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.017876 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k" podStartSLOduration=134.017855537 podStartE2EDuration="2m14.017855537s" podCreationTimestamp="2026-01-20 11:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:09.016233946 +0000 UTC m=+157.224555919" watchObservedRunningTime="2026-01-20 11:07:09.017855537 +0000 UTC m=+157.226177510" Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.018723 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:09 crc kubenswrapper[4725]: E0120 11:07:09.018978 4725 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:09.518968152 +0000 UTC m=+157.727290125 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.147327 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5" podStartSLOduration=135.147309579 podStartE2EDuration="2m15.147309579s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:09.127818514 +0000 UTC m=+157.336140497" watchObservedRunningTime="2026-01-20 11:07:09.147309579 +0000 UTC m=+157.355631552" Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.149063 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f" podStartSLOduration=134.149056963 podStartE2EDuration="2m14.149056963s" podCreationTimestamp="2026-01-20 11:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:09.146360628 +0000 UTC m=+157.354682601" watchObservedRunningTime="2026-01-20 11:07:09.149056963 +0000 UTC m=+157.357378936" Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.192333 4725 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:09 crc kubenswrapper[4725]: E0120 11:07:09.192978 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:09.692954968 +0000 UTC m=+157.901276941 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.216639 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-4s7gv" podStartSLOduration=10.216622354 podStartE2EDuration="10.216622354s" podCreationTimestamp="2026-01-20 11:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:09.199979509 +0000 UTC m=+157.408301482" watchObservedRunningTime="2026-01-20 11:07:09.216622354 +0000 UTC m=+157.424944327" Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.217059 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6s2qz" podStartSLOduration=135.217050078 
podStartE2EDuration="2m15.217050078s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:09.214522468 +0000 UTC m=+157.422844441" watchObservedRunningTime="2026-01-20 11:07:09.217050078 +0000 UTC m=+157.425372051" Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.241362 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlw62" podStartSLOduration=134.241345673 podStartE2EDuration="2m14.241345673s" podCreationTimestamp="2026-01-20 11:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:09.23869486 +0000 UTC m=+157.447016833" watchObservedRunningTime="2026-01-20 11:07:09.241345673 +0000 UTC m=+157.449667646" Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.247347 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 11:07:09 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Jan 20 11:07:09 crc kubenswrapper[4725]: [+]process-running ok Jan 20 11:07:09 crc kubenswrapper[4725]: healthz check failed Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.247410 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.266460 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" podStartSLOduration=133.266435864 podStartE2EDuration="2m13.266435864s" podCreationTimestamp="2026-01-20 11:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:09.264031599 +0000 UTC m=+157.472353572" watchObservedRunningTime="2026-01-20 11:07:09.266435864 +0000 UTC m=+157.474757837" Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.294154 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:09 crc kubenswrapper[4725]: E0120 11:07:09.294599 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:09.794580592 +0000 UTC m=+158.002902565 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.395420 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:09 crc kubenswrapper[4725]: E0120 11:07:09.395615 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:09.895583767 +0000 UTC m=+158.103905740 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.395747 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:09 crc kubenswrapper[4725]: E0120 11:07:09.396125 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:09.896110583 +0000 UTC m=+158.104432556 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.496434 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:09 crc kubenswrapper[4725]: E0120 11:07:09.497690 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:09.997669666 +0000 UTC m=+158.205991639 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.598606 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:09 crc kubenswrapper[4725]: E0120 11:07:09.599127 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:10.099111953 +0000 UTC m=+158.307433926 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.699282 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:09 crc kubenswrapper[4725]: E0120 11:07:09.699630 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:10.199610083 +0000 UTC m=+158.407932046 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.801048 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:09 crc kubenswrapper[4725]: E0120 11:07:09.801458 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:10.301445153 +0000 UTC m=+158.509767126 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.902323 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:09 crc kubenswrapper[4725]: E0120 11:07:09.902625 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:10.402607343 +0000 UTC m=+158.610929326 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.916150 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" event={"ID":"1bb3a268-d628-4c34-b9ca-38d43d82bf86","Type":"ContainerStarted","Data":"2a535dc5fe3813256d22334a0b77b08466b5880b0812562973bec061393a4d38"}
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.917646 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv" event={"ID":"4df8c05f-b523-439b-908b-c4f34b22b7e9","Type":"ContainerStarted","Data":"af77a96bd9ba35fb3ac538e2761fa92acee4c18ee7e63ab0916e014c047aa256"}
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.921530 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8pplm"]
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.922215 4725 generic.go:334] "Generic (PLEG): container finished" podID="e2d56c6e-b9ad-4de9-8fe6-06b00293050e" containerID="e8e7a4e36aba81c1bb4622af4c301d49b30996cd6ad2e2e0a5c6e98da1b99ab0" exitCode=0
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.922602 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9" event={"ID":"e2d56c6e-b9ad-4de9-8fe6-06b00293050e","Type":"ContainerDied","Data":"e8e7a4e36aba81c1bb4622af4c301d49b30996cd6ad2e2e0a5c6e98da1b99ab0"}
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.922824 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8pplm"
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.924578 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-psvt7" event={"ID":"876f0761-c4c3-42f7-81f8-9a26071a7676","Type":"ContainerStarted","Data":"6ded86d27c51be355b6b1ed8bb3015d47742d646e66fff0dabb047d8e4d55497"}
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.925450 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.926940 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-x85nm" event={"ID":"db710f25-e573-414c-9129-0dfa945d0b71","Type":"ContainerStarted","Data":"acc0e8d380386dccbb93d86c4c17c9015b976635b8e2fb08ed60728195d4e9f6"}
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.928931 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw" event={"ID":"396ed454-f2c7-483a-8aad-0953041099b5","Type":"ContainerStarted","Data":"b790973920d56a9ce46f8c4b3b7e161ff94a2029028c99f437a3f218f74faa88"}
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.930373 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sh5db" event={"ID":"b07c5d50-bb91-412d-b86a-3d736a16a81d","Type":"ContainerStarted","Data":"a2ed9a7c94d3a76a3edf66f57d30c78a23fa246c399b614c543f24b1735b8ce9"}
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.932167 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" event={"ID":"502a4051-5a60-4e90-a3f2-7dc035950a9b","Type":"ContainerStarted","Data":"4e8f7c705143e7b6c5cb6feb639a537dc020d95d17ca8baee39e25fc4da83488"}
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.932962 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj"
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.934806 4725 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tgvmj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body=
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.934850 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" podUID="502a4051-5a60-4e90-a3f2-7dc035950a9b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused"
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.935952 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-twkw7" event={"ID":"cb0c9cf6-4966-4bd0-8933-823bc00e103c","Type":"ContainerStarted","Data":"9266f669098b4acf7bf846a4f35ee36aeffd9332c7441f6d5f058d68fe3c3fd5"}
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.938814 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl" event={"ID":"29ff5711-1e81-4ed0-8acd-6124100de37d","Type":"ContainerStarted","Data":"b15b917912baf61f61ad944365802ab24535e6598f766f30e676fc72d19ffa4e"}
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.940859 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-vc6c2" event={"ID":"08bc2ba3-3f1f-40df-bf3d-1d5ed634945b","Type":"ContainerStarted","Data":"195122e431daedb3c4477b730ec22b44608aea9ea19b78430e2442a39e386352"}
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.941374 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-vc6c2"
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.942644 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5" event={"ID":"1f8986ee-ae07-4ffe-89f2-c73eca4d3465","Type":"ContainerStarted","Data":"29cf57e4793fd00dc38bb4eef89cfdf01955ea5fc0076a381c6b53926b3ab853"}
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.943383 4725 patch_prober.go:28] interesting pod/console-operator-58897d9998-vc6c2 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/readyz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body=
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.943554 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-vc6c2" podUID="08bc2ba3-3f1f-40df-bf3d-1d5ed634945b" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/readyz\": dial tcp 10.217.0.29:8443: connect: connection refused"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.059757 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:10 crc kubenswrapper[4725]: E0120 11:07:10.060853 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:10.560812962 +0000 UTC m=+158.769134955 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.063448 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj" event={"ID":"9d0ff97b-8da9-4156-a78b-9ebd6886313f","Type":"ContainerStarted","Data":"d49029f806fcceece28215f1aecf257c556fa187a2ac2fa27c2e9c6b0548f7bc"}
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.072485 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8pplm"]
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.077577 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9" event={"ID":"f27b4eea-081e-421a-83e9-8a5266163c53","Type":"ContainerStarted","Data":"eb550e6132dffe9232a2199f66417b13c2f3e0934104253d5a9a59db899c9260"}
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.096878 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6n4zh"]
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.098291 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6n4zh"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.101615 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8" event={"ID":"3f51665c-048e-4625-846b-872a367664e5","Type":"ContainerStarted","Data":"8a28d3e5f9a6753eb2b00804bf186cdc01ed67f24f6ffcf21e59f0762b62548b"}
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.103233 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.106000 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.128525 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6n4zh"]
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.133099 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" event={"ID":"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39","Type":"ContainerStarted","Data":"ea963c578918bf4bf99332528fc4e04f7b319045506af3111dcaf31e6c185053"}
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.133782 4725 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-lwhzw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.133814 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" podUID="600286e6-beb3-40f1-9077-9c8abf34d55a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.135133 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.138508 4725 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-r5qmp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.138547 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" podUID="eb4612ff-dcf7-4e19-af27-fb8b3b54ce39" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.140124 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw" podStartSLOduration=136.140112081 podStartE2EDuration="2m16.140112081s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:10.135989982 +0000 UTC m=+158.344311965" watchObservedRunningTime="2026-01-20 11:07:10.140112081 +0000 UTC m=+158.348434054"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.211216 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.211677 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ba77d4b-0178-4730-8869-389efdf58851-catalog-content\") pod \"community-operators-8pplm\" (UID: \"1ba77d4b-0178-4730-8869-389efdf58851\") " pod="openshift-marketplace/community-operators-8pplm"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.211758 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ba77d4b-0178-4730-8869-389efdf58851-utilities\") pod \"community-operators-8pplm\" (UID: \"1ba77d4b-0178-4730-8869-389efdf58851\") " pod="openshift-marketplace/community-operators-8pplm"
Jan 20 11:07:10 crc kubenswrapper[4725]: E0120 11:07:10.212005 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:10.711983858 +0000 UTC m=+158.920305831 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.213651 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.310978 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8h6b\" (UniqueName: \"kubernetes.io/projected/1ba77d4b-0178-4730-8869-389efdf58851-kube-api-access-m8h6b\") pod \"community-operators-8pplm\" (UID: \"1ba77d4b-0178-4730-8869-389efdf58851\") " pod="openshift-marketplace/community-operators-8pplm"
Jan 20 11:07:10 crc kubenswrapper[4725]: E0120 11:07:10.316898 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:10.816878575 +0000 UTC m=+159.025200548 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.348028 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vbr29"]
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.350021 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vbr29"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.357734 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8" podStartSLOduration=134.357715513 podStartE2EDuration="2m14.357715513s" podCreationTimestamp="2026-01-20 11:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:10.34464023 +0000 UTC m=+158.552962203" watchObservedRunningTime="2026-01-20 11:07:10.357715513 +0000 UTC m=+158.566037486"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.362568 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 20 11:07:10 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld
Jan 20 11:07:10 crc kubenswrapper[4725]: [+]process-running ok
Jan 20 11:07:10 crc kubenswrapper[4725]: healthz check failed
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.362936 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.367247 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vbr29"]
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.411821 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.412179 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkgp6\" (UniqueName: \"kubernetes.io/projected/7ebdb343-11c1-4e64-9538-98ca4298b821-kube-api-access-rkgp6\") pod \"certified-operators-6n4zh\" (UID: \"7ebdb343-11c1-4e64-9538-98ca4298b821\") " pod="openshift-marketplace/certified-operators-6n4zh"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.412314 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ebdb343-11c1-4e64-9538-98ca4298b821-catalog-content\") pod \"certified-operators-6n4zh\" (UID: \"7ebdb343-11c1-4e64-9538-98ca4298b821\") " pod="openshift-marketplace/certified-operators-6n4zh"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.412372 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8h6b\" (UniqueName: \"kubernetes.io/projected/1ba77d4b-0178-4730-8869-389efdf58851-kube-api-access-m8h6b\") pod \"community-operators-8pplm\" (UID: \"1ba77d4b-0178-4730-8869-389efdf58851\") " pod="openshift-marketplace/community-operators-8pplm"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.412544 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ba77d4b-0178-4730-8869-389efdf58851-catalog-content\") pod \"community-operators-8pplm\" (UID: \"1ba77d4b-0178-4730-8869-389efdf58851\") " pod="openshift-marketplace/community-operators-8pplm"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.412600 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ebdb343-11c1-4e64-9538-98ca4298b821-utilities\") pod \"certified-operators-6n4zh\" (UID: \"7ebdb343-11c1-4e64-9538-98ca4298b821\") " pod="openshift-marketplace/certified-operators-6n4zh"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.412626 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ba77d4b-0178-4730-8869-389efdf58851-utilities\") pod \"community-operators-8pplm\" (UID: \"1ba77d4b-0178-4730-8869-389efdf58851\") " pod="openshift-marketplace/community-operators-8pplm"
Jan 20 11:07:10 crc kubenswrapper[4725]: E0120 11:07:10.413521 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:10.913503222 +0000 UTC m=+159.121825185 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.415341 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ba77d4b-0178-4730-8869-389efdf58851-catalog-content\") pod \"community-operators-8pplm\" (UID: \"1ba77d4b-0178-4730-8869-389efdf58851\") " pod="openshift-marketplace/community-operators-8pplm"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.416923 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ba77d4b-0178-4730-8869-389efdf58851-utilities\") pod \"community-operators-8pplm\" (UID: \"1ba77d4b-0178-4730-8869-389efdf58851\") " pod="openshift-marketplace/community-operators-8pplm"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.436781 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl" podStartSLOduration=134.436763685 podStartE2EDuration="2m14.436763685s" podCreationTimestamp="2026-01-20 11:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:10.392909583 +0000 UTC m=+158.601231546" watchObservedRunningTime="2026-01-20 11:07:10.436763685 +0000 UTC m=+158.645085658"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.437659 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" podStartSLOduration=135.437653663 podStartE2EDuration="2m15.437653663s" podCreationTimestamp="2026-01-20 11:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:10.434986529 +0000 UTC m=+158.643308502" watchObservedRunningTime="2026-01-20 11:07:10.437653663 +0000 UTC m=+158.645975636"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.456291 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8h6b\" (UniqueName: \"kubernetes.io/projected/1ba77d4b-0178-4730-8869-389efdf58851-kube-api-access-m8h6b\") pod \"community-operators-8pplm\" (UID: \"1ba77d4b-0178-4730-8869-389efdf58851\") " pod="openshift-marketplace/community-operators-8pplm"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.503477 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-twkw7" podStartSLOduration=137.503455518 podStartE2EDuration="2m17.503455518s" podCreationTimestamp="2026-01-20 11:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:10.481459874 +0000 UTC m=+158.689781857" watchObservedRunningTime="2026-01-20 11:07:10.503455518 +0000 UTC m=+158.711777491"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.505732 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vs4qk"]
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.507035 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vs4qk"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.514273 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkgp6\" (UniqueName: \"kubernetes.io/projected/7ebdb343-11c1-4e64-9538-98ca4298b821-kube-api-access-rkgp6\") pod \"certified-operators-6n4zh\" (UID: \"7ebdb343-11c1-4e64-9538-98ca4298b821\") " pod="openshift-marketplace/certified-operators-6n4zh"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.514346 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.514381 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ebdb343-11c1-4e64-9538-98ca4298b821-catalog-content\") pod \"certified-operators-6n4zh\" (UID: \"7ebdb343-11c1-4e64-9538-98ca4298b821\") " pod="openshift-marketplace/certified-operators-6n4zh"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.514420 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/247dcae1-930b-476d-abbe-f33c3da0730b-catalog-content\") pod \"community-operators-vbr29\" (UID: \"247dcae1-930b-476d-abbe-f33c3da0730b\") " pod="openshift-marketplace/community-operators-vbr29"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.514485 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8wq8\" (UniqueName: \"kubernetes.io/projected/247dcae1-930b-476d-abbe-f33c3da0730b-kube-api-access-z8wq8\") pod \"community-operators-vbr29\" (UID: \"247dcae1-930b-476d-abbe-f33c3da0730b\") " pod="openshift-marketplace/community-operators-vbr29"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.514535 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ebdb343-11c1-4e64-9538-98ca4298b821-utilities\") pod \"certified-operators-6n4zh\" (UID: \"7ebdb343-11c1-4e64-9538-98ca4298b821\") " pod="openshift-marketplace/certified-operators-6n4zh"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.514569 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/247dcae1-930b-476d-abbe-f33c3da0730b-utilities\") pod \"community-operators-vbr29\" (UID: \"247dcae1-930b-476d-abbe-f33c3da0730b\") " pod="openshift-marketplace/community-operators-vbr29"
Jan 20 11:07:10 crc kubenswrapper[4725]: E0120 11:07:10.515398 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:11.015384874 +0000 UTC m=+159.223706847 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.515866 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ebdb343-11c1-4e64-9538-98ca4298b821-catalog-content\") pod \"certified-operators-6n4zh\" (UID: \"7ebdb343-11c1-4e64-9538-98ca4298b821\") " pod="openshift-marketplace/certified-operators-6n4zh"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.516183 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ebdb343-11c1-4e64-9538-98ca4298b821-utilities\") pod \"certified-operators-6n4zh\" (UID: \"7ebdb343-11c1-4e64-9538-98ca4298b821\") " pod="openshift-marketplace/certified-operators-6n4zh"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.527479 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vs4qk"]
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.553334 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-psvt7" podStartSLOduration=134.55331012 podStartE2EDuration="2m14.55331012s" podCreationTimestamp="2026-01-20 11:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:10.544188972 +0000 UTC m=+158.752510975" watchObservedRunningTime="2026-01-20 11:07:10.55331012 +0000 UTC m=+158.761632093"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.581138 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkgp6\" (UniqueName: \"kubernetes.io/projected/7ebdb343-11c1-4e64-9538-98ca4298b821-kube-api-access-rkgp6\") pod \"certified-operators-6n4zh\" (UID: \"7ebdb343-11c1-4e64-9538-98ca4298b821\") " pod="openshift-marketplace/certified-operators-6n4zh"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.582619 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9" podStartSLOduration=137.582607644 podStartE2EDuration="2m17.582607644s" podCreationTimestamp="2026-01-20 11:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:10.580352563 +0000 UTC m=+158.788674546" watchObservedRunningTime="2026-01-20 11:07:10.582607644 +0000 UTC m=+158.790929617"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.620419 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:10 crc kubenswrapper[4725]: E0120 11:07:10.620566 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:11.12053793 +0000 UTC m=+159.328859903 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.620714 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8wq8\" (UniqueName: \"kubernetes.io/projected/247dcae1-930b-476d-abbe-f33c3da0730b-kube-api-access-z8wq8\") pod \"community-operators-vbr29\" (UID: \"247dcae1-930b-476d-abbe-f33c3da0730b\") " pod="openshift-marketplace/community-operators-vbr29"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.620752 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-utilities\") pod \"certified-operators-vs4qk\" (UID: \"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9\") " pod="openshift-marketplace/certified-operators-vs4qk"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.620814 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/247dcae1-930b-476d-abbe-f33c3da0730b-utilities\") pod \"community-operators-vbr29\" (UID: \"247dcae1-930b-476d-abbe-f33c3da0730b\") " pod="openshift-marketplace/community-operators-vbr29"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.621393 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk8lh\" (UniqueName: \"kubernetes.io/projected/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-kube-api-access-mk8lh\") pod \"certified-operators-vs4qk\" (UID: \"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9\") " pod="openshift-marketplace/certified-operators-vs4qk"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.621444 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.621481 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/247dcae1-930b-476d-abbe-f33c3da0730b-catalog-content\") pod \"community-operators-vbr29\" (UID: \"247dcae1-930b-476d-abbe-f33c3da0730b\") " pod="openshift-marketplace/community-operators-vbr29"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.621501 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-catalog-content\") pod \"certified-operators-vs4qk\" (UID: \"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9\") " pod="openshift-marketplace/certified-operators-vs4qk"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.621310 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/247dcae1-930b-476d-abbe-f33c3da0730b-utilities\") pod \"community-operators-vbr29\" (UID: \"247dcae1-930b-476d-abbe-f33c3da0730b\") " pod="openshift-marketplace/community-operators-vbr29"
Jan 20 11:07:10 crc kubenswrapper[4725]: E0120 11:07:10.621829 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:11.12181649 +0000 UTC m=+159.330138463 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.622111 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/247dcae1-930b-476d-abbe-f33c3da0730b-catalog-content\") pod \"community-operators-vbr29\" (UID: \"247dcae1-930b-476d-abbe-f33c3da0730b\") " pod="openshift-marketplace/community-operators-vbr29"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.641844 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-vc6c2" podStartSLOduration=136.641825621 podStartE2EDuration="2m16.641825621s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:10.619817957 +0000 UTC m=+158.828139930" watchObservedRunningTime="2026-01-20 11:07:10.641825621 +0000 UTC m=+158.850147594"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.647016 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8wq8\" (UniqueName: \"kubernetes.io/projected/247dcae1-930b-476d-abbe-f33c3da0730b-kube-api-access-z8wq8\") pod \"community-operators-vbr29\" (UID: \"247dcae1-930b-476d-abbe-f33c3da0730b\") " pod="openshift-marketplace/community-operators-vbr29"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120
11:07:10.649606 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sh5db" podStartSLOduration=135.649594756 podStartE2EDuration="2m15.649594756s" podCreationTimestamp="2026-01-20 11:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:10.645738185 +0000 UTC m=+158.854060178" watchObservedRunningTime="2026-01-20 11:07:10.649594756 +0000 UTC m=+158.857916739" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.724613 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.724839 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-catalog-content\") pod \"certified-operators-vs4qk\" (UID: \"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9\") " pod="openshift-marketplace/certified-operators-vs4qk" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.724893 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-utilities\") pod \"certified-operators-vs4qk\" (UID: \"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9\") " pod="openshift-marketplace/certified-operators-vs4qk" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.724936 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk8lh\" (UniqueName: 
\"kubernetes.io/projected/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-kube-api-access-mk8lh\") pod \"certified-operators-vs4qk\" (UID: \"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9\") " pod="openshift-marketplace/certified-operators-vs4qk" Jan 20 11:07:10 crc kubenswrapper[4725]: E0120 11:07:10.725330 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:11.225317523 +0000 UTC m=+159.433639496 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.725665 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-catalog-content\") pod \"certified-operators-vs4qk\" (UID: \"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9\") " pod="openshift-marketplace/certified-operators-vs4qk" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.725867 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-utilities\") pod \"certified-operators-vs4qk\" (UID: \"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9\") " pod="openshift-marketplace/certified-operators-vs4qk" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.726792 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" podStartSLOduration=136.72677466 podStartE2EDuration="2m16.72677466s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:10.670525006 +0000 UTC m=+158.878846969" watchObservedRunningTime="2026-01-20 11:07:10.72677466 +0000 UTC m=+158.935096633" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.729940 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8pplm" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.731867 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vbr29" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.744748 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6n4zh" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.760186 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk8lh\" (UniqueName: \"kubernetes.io/projected/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-kube-api-access-mk8lh\") pod \"certified-operators-vs4qk\" (UID: \"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9\") " pod="openshift-marketplace/certified-operators-vs4qk" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.826196 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.853428 4725 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/certified-operators-vs4qk" Jan 20 11:07:10 crc kubenswrapper[4725]: E0120 11:07:10.855057 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:11.355032623 +0000 UTC m=+159.563354596 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.928514 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:10 crc kubenswrapper[4725]: E0120 11:07:10.928939 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:11.428923683 +0000 UTC m=+159.637245656 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:11 crc kubenswrapper[4725]: I0120 11:07:11.030714 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:11 crc kubenswrapper[4725]: E0120 11:07:11.131432 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:11.631408387 +0000 UTC m=+159.839730360 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:11 crc kubenswrapper[4725]: I0120 11:07:11.131668 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:11 crc kubenswrapper[4725]: E0120 11:07:11.132000 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:11.631989556 +0000 UTC m=+159.840311529 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:11 crc kubenswrapper[4725]: I0120 11:07:11.255938 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:11 crc kubenswrapper[4725]: E0120 11:07:11.256359 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:11.756347147 +0000 UTC m=+159.964669110 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:11 crc kubenswrapper[4725]: I0120 11:07:11.280815 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 11:07:11 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Jan 20 11:07:11 crc kubenswrapper[4725]: [+]process-running ok Jan 20 11:07:11 crc kubenswrapper[4725]: healthz check failed Jan 20 11:07:11 crc kubenswrapper[4725]: I0120 11:07:11.280907 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 11:07:11 crc kubenswrapper[4725]: I0120 11:07:11.356910 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:11 crc kubenswrapper[4725]: E0120 11:07:11.358482 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-20 11:07:11.858463906 +0000 UTC m=+160.066785879 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:11 crc kubenswrapper[4725]: I0120 11:07:11.473892 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:11 crc kubenswrapper[4725]: E0120 11:07:11.477200 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:11.97718063 +0000 UTC m=+160.185502603 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:11 crc kubenswrapper[4725]: I0120 11:07:11.588638 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:11 crc kubenswrapper[4725]: E0120 11:07:11.589151 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:12.08912955 +0000 UTC m=+160.297451523 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:11 crc kubenswrapper[4725]: I0120 11:07:11.694870 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:11 crc kubenswrapper[4725]: E0120 11:07:11.695673 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:12.195659939 +0000 UTC m=+160.403981912 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:11 crc kubenswrapper[4725]: I0120 11:07:11.712855 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-g28q4" event={"ID":"6275b0b4-f9e0-4ecc-8a7a-3d702ec753eb","Type":"ContainerStarted","Data":"5fcb5a5f6fd4a62a750e6611ee8b9381e62e99e69b8987cfd51959ab622d7a52"} Jan 20 11:07:11 crc kubenswrapper[4725]: I0120 11:07:11.800188 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:11 crc kubenswrapper[4725]: E0120 11:07:11.800708 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:12.30068902 +0000 UTC m=+160.509010993 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:11 crc kubenswrapper[4725]: I0120 11:07:11.828127 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj" event={"ID":"9d0ff97b-8da9-4156-a78b-9ebd6886313f","Type":"ContainerStarted","Data":"84ad582d13d994d4f0f306690526c0413b3d214c61ca4bc111ea0ba825199abb"} Jan 20 11:07:11 crc kubenswrapper[4725]: I0120 11:07:11.840551 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq" event={"ID":"222f710d-f6a2-48e7-9175-55b50f3aba30","Type":"ContainerStarted","Data":"61a7ab4966091d4f69d91db412773e4eeb151873ba5ebf492021b19b86bc66dc"} Jan 20 11:07:11 crc kubenswrapper[4725]: I0120 11:07:11.892129 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj" podStartSLOduration=137.892107572 podStartE2EDuration="2m17.892107572s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:11.8885201 +0000 UTC m=+160.096842073" watchObservedRunningTime="2026-01-20 11:07:11.892107572 +0000 UTC m=+160.100429565" Jan 20 11:07:11 crc kubenswrapper[4725]: I0120 11:07:11.893799 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-x85nm" Jan 20 11:07:11 crc kubenswrapper[4725]: I0120 11:07:11.908234 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:11 crc kubenswrapper[4725]: E0120 11:07:11.908618 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:12.408604373 +0000 UTC m=+160.616926346 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.092983 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-7j2sn" event={"ID":"eca1f8da-59f2-404e-a5e0-dbe1a191b885","Type":"ContainerStarted","Data":"48fa444241a2b7476ccea56c69f3435aa4b6f39132a3a0083775d0abf0a56a37"} Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.093501 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:12 crc kubenswrapper[4725]: E0120 11:07:12.093873 
4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:12.593851183 +0000 UTC m=+160.802173156 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.120163 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" event={"ID":"9a6106c0-75fa-4285-bc23-06ced58cf133","Type":"ContainerStarted","Data":"6d2fdd10ad23f57b144fbbf33de8f10cee0a91d14a076d0c7c7fb512c2d47b34"} Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.120241 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.130867 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c2jtp"] Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.132132 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2jtp" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.134180 4725 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-lhx4z container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.134223 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.138642 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq" podStartSLOduration=137.138612144 podStartE2EDuration="2m17.138612144s" podCreationTimestamp="2026-01-20 11:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:12.126754771 +0000 UTC m=+160.335076754" watchObservedRunningTime="2026-01-20 11:07:12.138612144 +0000 UTC m=+160.346934117" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.140746 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk" event={"ID":"c3dff36b-2e27-4c6b-bee4-19cd58833ea7","Type":"ContainerStarted","Data":"6072c7785a960e401b9dbe1aa849d245daee87165d43547c831fd6da21c65c14"} Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.262493 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:12 crc kubenswrapper[4725]: E0120 11:07:12.263010 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:12.762992616 +0000 UTC m=+160.971314589 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.263373 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.274416 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z" event={"ID":"e1eba244-7c59-4933-ad4c-5dccc8fdc854","Type":"ContainerStarted","Data":"ad73e49a159a3f1e9ce914c33abe4915142f3b24a74a5d1133801772668fbe5f"} Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.276426 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.288334 4725 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-d7t4z container/packageserver 
namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.288401 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z" podUID="e1eba244-7c59-4933-ad4c-5dccc8fdc854" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.288908 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 11:07:12 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Jan 20 11:07:12 crc kubenswrapper[4725]: [+]process-running ok Jan 20 11:07:12 crc kubenswrapper[4725]: healthz check failed Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.288961 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.292617 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv" event={"ID":"4df8c05f-b523-439b-908b-c4f34b22b7e9","Type":"ContainerStarted","Data":"2ba53f9e92e7a0a3ccbcd8596513e2e9bab5a869b9fa262deb6dfb896e7387bb"} Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.308865 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2jtp"] Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 
11:07:12.334843 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-x85nm" podStartSLOduration=13.334822691 podStartE2EDuration="13.334822691s" podCreationTimestamp="2026-01-20 11:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:12.333218661 +0000 UTC m=+160.541540634" watchObservedRunningTime="2026-01-20 11:07:12.334822691 +0000 UTC m=+160.543144664" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.351135 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn" event={"ID":"293d5f2d-38b8-49ad-b7cc-eaf6ea931e59","Type":"ContainerStarted","Data":"8c1fce15f7bb048ecca7c9ccb244baea7c59777b5805576f1d0e641d8c3d55a6"} Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.378294 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2hmdd" event={"ID":"6c5d8a1b-5c54-4877-8739-a83ab530197d","Type":"ContainerStarted","Data":"c40d839a198aef3fd3dea37307bd509fe523f84265c35bd129742ca5ff0a0f56"} Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.389336 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-2hmdd" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.390558 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.390853 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqcqh\" (UniqueName: 
\"kubernetes.io/projected/10de7f77-2b14-4c56-b4db-ebb93422b89c-kube-api-access-fqcqh\") pod \"redhat-marketplace-c2jtp\" (UID: \"10de7f77-2b14-4c56-b4db-ebb93422b89c\") " pod="openshift-marketplace/redhat-marketplace-c2jtp" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.390894 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10de7f77-2b14-4c56-b4db-ebb93422b89c-utilities\") pod \"redhat-marketplace-c2jtp\" (UID: \"10de7f77-2b14-4c56-b4db-ebb93422b89c\") " pod="openshift-marketplace/redhat-marketplace-c2jtp" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.391128 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10de7f77-2b14-4c56-b4db-ebb93422b89c-catalog-content\") pod \"redhat-marketplace-c2jtp\" (UID: \"10de7f77-2b14-4c56-b4db-ebb93422b89c\") " pod="openshift-marketplace/redhat-marketplace-c2jtp" Jan 20 11:07:12 crc kubenswrapper[4725]: E0120 11:07:12.392259 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:12.892236102 +0000 UTC m=+161.100558075 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.398540 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28" event={"ID":"8428545d-e40d-4259-b579-ce7bff401888","Type":"ContainerStarted","Data":"36f5f86f95b4bde71664c879d9ab5f8775595d7b1d17e21294d00517a7a63568"} Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.398571 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.412853 4725 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tgvmj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.412887 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" podUID="502a4051-5a60-4e90-a3f2-7dc035950a9b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.444208 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" Jan 20 11:07:12 crc kubenswrapper[4725]: 
I0120 11:07:12.549640 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.549701 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqcqh\" (UniqueName: \"kubernetes.io/projected/10de7f77-2b14-4c56-b4db-ebb93422b89c-kube-api-access-fqcqh\") pod \"redhat-marketplace-c2jtp\" (UID: \"10de7f77-2b14-4c56-b4db-ebb93422b89c\") " pod="openshift-marketplace/redhat-marketplace-c2jtp" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.549724 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10de7f77-2b14-4c56-b4db-ebb93422b89c-utilities\") pod \"redhat-marketplace-c2jtp\" (UID: \"10de7f77-2b14-4c56-b4db-ebb93422b89c\") " pod="openshift-marketplace/redhat-marketplace-c2jtp" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.549779 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10de7f77-2b14-4c56-b4db-ebb93422b89c-catalog-content\") pod \"redhat-marketplace-c2jtp\" (UID: \"10de7f77-2b14-4c56-b4db-ebb93422b89c\") " pod="openshift-marketplace/redhat-marketplace-c2jtp" Jan 20 11:07:12 crc kubenswrapper[4725]: E0120 11:07:12.553608 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:13.053575448 +0000 UTC m=+161.261897421 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.554034 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10de7f77-2b14-4c56-b4db-ebb93422b89c-catalog-content\") pod \"redhat-marketplace-c2jtp\" (UID: \"10de7f77-2b14-4c56-b4db-ebb93422b89c\") " pod="openshift-marketplace/redhat-marketplace-c2jtp" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.554422 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10de7f77-2b14-4c56-b4db-ebb93422b89c-utilities\") pod \"redhat-marketplace-c2jtp\" (UID: \"10de7f77-2b14-4c56-b4db-ebb93422b89c\") " pod="openshift-marketplace/redhat-marketplace-c2jtp" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.610998 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.611069 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.615881 4725 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z" podStartSLOduration=136.615855872 podStartE2EDuration="2m16.615855872s" podCreationTimestamp="2026-01-20 11:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:12.610720921 +0000 UTC m=+160.819042904" watchObservedRunningTime="2026-01-20 11:07:12.615855872 +0000 UTC m=+160.824177845" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.616774 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" podStartSLOduration=139.616767411 podStartE2EDuration="2m19.616767411s" podCreationTimestamp="2026-01-20 11:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:12.382145083 +0000 UTC m=+160.590467066" watchObservedRunningTime="2026-01-20 11:07:12.616767411 +0000 UTC m=+160.825089374" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.622760 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.623597 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.634159 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqcqh\" (UniqueName: \"kubernetes.io/projected/10de7f77-2b14-4c56-b4db-ebb93422b89c-kube-api-access-fqcqh\") pod \"redhat-marketplace-c2jtp\" (UID: \"10de7f77-2b14-4c56-b4db-ebb93422b89c\") " pod="openshift-marketplace/redhat-marketplace-c2jtp" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 
11:07:12.635427 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lxmdj"] Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.651507 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxmdj" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.652919 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:12 crc kubenswrapper[4725]: E0120 11:07:12.655520 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:13.155492392 +0000 UTC m=+161.363814375 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.660101 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.678522 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn" podStartSLOduration=137.678502638 podStartE2EDuration="2m17.678502638s" podCreationTimestamp="2026-01-20 11:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:12.662289896 +0000 UTC m=+160.870611869" watchObservedRunningTime="2026-01-20 11:07:12.678502638 +0000 UTC m=+160.886824611" Jan 20 11:07:12 crc kubenswrapper[4725]: E0120 11:07:12.680801 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:13.18078421 +0000 UTC m=+161.389106183 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.687413 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxmdj"] Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.883040 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk" podStartSLOduration=137.883013505 podStartE2EDuration="2m17.883013505s" podCreationTimestamp="2026-01-20 11:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:12.810149449 +0000 UTC m=+161.018471422" watchObservedRunningTime="2026-01-20 11:07:12.883013505 +0000 UTC m=+161.091335478" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.887711 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.887951 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2n6d\" (UniqueName: \"kubernetes.io/projected/39d02691-2128-45e8-841b-5bbf79e0a116-kube-api-access-d2n6d\") pod \"redhat-marketplace-lxmdj\" (UID: 
\"39d02691-2128-45e8-841b-5bbf79e0a116\") " pod="openshift-marketplace/redhat-marketplace-lxmdj" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.887980 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39d02691-2128-45e8-841b-5bbf79e0a116-catalog-content\") pod \"redhat-marketplace-lxmdj\" (UID: \"39d02691-2128-45e8-841b-5bbf79e0a116\") " pod="openshift-marketplace/redhat-marketplace-lxmdj" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.888004 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39d02691-2128-45e8-841b-5bbf79e0a116-utilities\") pod \"redhat-marketplace-lxmdj\" (UID: \"39d02691-2128-45e8-841b-5bbf79e0a116\") " pod="openshift-marketplace/redhat-marketplace-lxmdj" Jan 20 11:07:12 crc kubenswrapper[4725]: E0120 11:07:12.888205 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:13.388188959 +0000 UTC m=+161.596510932 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.888561 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.888584 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.896293 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2jtp" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.976981 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-2hmdd" podStartSLOduration=138.976960608 podStartE2EDuration="2m18.976960608s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:12.900408424 +0000 UTC m=+161.108730397" watchObservedRunningTime="2026-01-20 11:07:12.976960608 +0000 UTC m=+161.185282581" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.990967 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2n6d\" (UniqueName: \"kubernetes.io/projected/39d02691-2128-45e8-841b-5bbf79e0a116-kube-api-access-d2n6d\") pod \"redhat-marketplace-lxmdj\" (UID: \"39d02691-2128-45e8-841b-5bbf79e0a116\") " 
pod="openshift-marketplace/redhat-marketplace-lxmdj" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.991014 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39d02691-2128-45e8-841b-5bbf79e0a116-catalog-content\") pod \"redhat-marketplace-lxmdj\" (UID: \"39d02691-2128-45e8-841b-5bbf79e0a116\") " pod="openshift-marketplace/redhat-marketplace-lxmdj" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.991054 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39d02691-2128-45e8-841b-5bbf79e0a116-utilities\") pod \"redhat-marketplace-lxmdj\" (UID: \"39d02691-2128-45e8-841b-5bbf79e0a116\") " pod="openshift-marketplace/redhat-marketplace-lxmdj" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.991163 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:12 crc kubenswrapper[4725]: E0120 11:07:12.991543 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:13.491526728 +0000 UTC m=+161.699848701 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.992677 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39d02691-2128-45e8-841b-5bbf79e0a116-catalog-content\") pod \"redhat-marketplace-lxmdj\" (UID: \"39d02691-2128-45e8-841b-5bbf79e0a116\") " pod="openshift-marketplace/redhat-marketplace-lxmdj" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.992912 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39d02691-2128-45e8-841b-5bbf79e0a116-utilities\") pod \"redhat-marketplace-lxmdj\" (UID: \"39d02691-2128-45e8-841b-5bbf79e0a116\") " pod="openshift-marketplace/redhat-marketplace-lxmdj" Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.039250 4725 patch_prober.go:28] interesting pod/console-f9d7485db-75nfb container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.31:8443/health\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.039309 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-75nfb" podUID="b8859d17-62ea-47b3-ac63-537e69ec9f90" containerName="console" probeResult="failure" output="Get \"https://10.217.0.31:8443/health\": dial tcp 10.217.0.31:8443: connect: connection refused" Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.046946 4725 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.718397 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2n6d\" (UniqueName: \"kubernetes.io/projected/39d02691-2128-45e8-841b-5bbf79e0a116-kube-api-access-d2n6d\") pod \"redhat-marketplace-lxmdj\" (UID: \"39d02691-2128-45e8-841b-5bbf79e0a116\") " pod="openshift-marketplace/redhat-marketplace-lxmdj" Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.722609 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv" podStartSLOduration=138.722591378 podStartE2EDuration="2m18.722591378s" podCreationTimestamp="2026-01-20 11:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:13.011998472 +0000 UTC m=+161.220320435" watchObservedRunningTime="2026-01-20 11:07:13.722591378 +0000 UTC m=+161.930913351" Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.730282 4725 patch_prober.go:28] interesting pod/console-operator-58897d9998-vc6c2 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.730376 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-vc6c2" podUID="08bc2ba3-3f1f-40df-bf3d-1d5ed634945b" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.741774 4725 patch_prober.go:28] interesting 
pod/oauth-openshift-558db77b4-lhx4z container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.741833 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.742963 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxmdj" Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.743343 4725 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tgvmj container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.743374 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" podUID="502a4051-5a60-4e90-a3f2-7dc035950a9b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.743773 4725 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tgvmj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.743818 
4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" podUID="502a4051-5a60-4e90-a3f2-7dc035950a9b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.764841 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.764895 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.764954 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.764966 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.769566 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:13 crc kubenswrapper[4725]: E0120 11:07:13.770066    4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:14.770032124 +0000 UTC m=+162.978354097 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.949570    4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:13 crc kubenswrapper[4725]: E0120 11:07:13.950194    4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:14.450153748 +0000 UTC m=+162.658475721 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:14 crc kubenswrapper[4725]: E0120 11:07:14.107705    4725 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.061s"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.108005    4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-nxchh"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.108054    4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-75nfb"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.108120    4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.108135    4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6nxjc"]
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.109582    4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6nxjc"]
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.109603    4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-78bg4"]
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.110242    4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 20 11:07:14 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld
Jan 20 11:07:14 crc kubenswrapper[4725]: [+]process-running ok
Jan 20 11:07:14 crc kubenswrapper[4725]: healthz check failed
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.110284    4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.111188    4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28" podStartSLOduration=139.11116333 podStartE2EDuration="2m19.11116333s" podCreationTimestamp="2026-01-20 11:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:14.110891651 +0000 UTC m=+162.319213624" watchObservedRunningTime="2026-01-20 11:07:14.11116333 +0000 UTC m=+162.319485303"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.111824    4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vbr29"]
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.111846    4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-78bg4"]
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.111944    4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-78bg4"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.112184    4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6nxjc"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.112219    4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:14 crc kubenswrapper[4725]: E0120 11:07:14.112696    4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:14.612683568 +0000 UTC m=+162.821005531 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.129150    4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.183979    4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbr29" event={"ID":"247dcae1-930b-476d-abbe-f33c3da0730b","Type":"ContainerStarted","Data":"01a79750127c09ea5c6dc20b661d6675fdb1d12c0c260ea3667e9b8f6125164f"}
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.221158    4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.221503    4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8ntp\" (UniqueName: \"kubernetes.io/projected/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-kube-api-access-k8ntp\") pod \"redhat-operators-6nxjc\" (UID: \"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6\") " pod="openshift-marketplace/redhat-operators-6nxjc"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.221552    4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-catalog-content\") pod \"redhat-operators-6nxjc\" (UID: \"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6\") " pod="openshift-marketplace/redhat-operators-6nxjc"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.221612    4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f648359-ab53-49a7-8f1a-77281c2bd53c-catalog-content\") pod \"redhat-operators-78bg4\" (UID: \"4f648359-ab53-49a7-8f1a-77281c2bd53c\") " pod="openshift-marketplace/redhat-operators-78bg4"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.221668    4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66ggl\" (UniqueName: \"kubernetes.io/projected/4f648359-ab53-49a7-8f1a-77281c2bd53c-kube-api-access-66ggl\") pod \"redhat-operators-78bg4\" (UID: \"4f648359-ab53-49a7-8f1a-77281c2bd53c\") " pod="openshift-marketplace/redhat-operators-78bg4"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.221690    4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-utilities\") pod \"redhat-operators-6nxjc\" (UID: \"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6\") " pod="openshift-marketplace/redhat-operators-6nxjc"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.221795    4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f648359-ab53-49a7-8f1a-77281c2bd53c-utilities\") pod \"redhat-operators-78bg4\" (UID: \"4f648359-ab53-49a7-8f1a-77281c2bd53c\") " pod="openshift-marketplace/redhat-operators-78bg4"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.222425    4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-vc6c2"
Jan 20 11:07:14 crc kubenswrapper[4725]: E0120 11:07:14.223194    4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:14.723176612 +0000 UTC m=+162.931498585 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.237315    4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 20 11:07:14 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld
Jan 20 11:07:14 crc kubenswrapper[4725]: [+]process-running ok
Jan 20 11:07:14 crc kubenswrapper[4725]: healthz check failed
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.237371    4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.318677    4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vs4qk"]
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.323808    4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f648359-ab53-49a7-8f1a-77281c2bd53c-catalog-content\") pod \"redhat-operators-78bg4\" (UID: \"4f648359-ab53-49a7-8f1a-77281c2bd53c\") " pod="openshift-marketplace/redhat-operators-78bg4"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.323868    4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66ggl\" (UniqueName: \"kubernetes.io/projected/4f648359-ab53-49a7-8f1a-77281c2bd53c-kube-api-access-66ggl\") pod \"redhat-operators-78bg4\" (UID: \"4f648359-ab53-49a7-8f1a-77281c2bd53c\") " pod="openshift-marketplace/redhat-operators-78bg4"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.323896    4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-utilities\") pod \"redhat-operators-6nxjc\" (UID: \"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6\") " pod="openshift-marketplace/redhat-operators-6nxjc"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.323942    4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f648359-ab53-49a7-8f1a-77281c2bd53c-utilities\") pod \"redhat-operators-78bg4\" (UID: \"4f648359-ab53-49a7-8f1a-77281c2bd53c\") " pod="openshift-marketplace/redhat-operators-78bg4"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.323976    4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.323997    4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8ntp\" (UniqueName: \"kubernetes.io/projected/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-kube-api-access-k8ntp\") pod \"redhat-operators-6nxjc\" (UID: \"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6\") " pod="openshift-marketplace/redhat-operators-6nxjc"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.324023    4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-catalog-content\") pod \"redhat-operators-6nxjc\" (UID: \"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6\") " pod="openshift-marketplace/redhat-operators-6nxjc"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.324820    4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-catalog-content\") pod \"redhat-operators-6nxjc\" (UID: \"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6\") " pod="openshift-marketplace/redhat-operators-6nxjc"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.325131    4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f648359-ab53-49a7-8f1a-77281c2bd53c-catalog-content\") pod \"redhat-operators-78bg4\" (UID: \"4f648359-ab53-49a7-8f1a-77281c2bd53c\") " pod="openshift-marketplace/redhat-operators-78bg4"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.325640    4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-utilities\") pod \"redhat-operators-6nxjc\" (UID: \"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6\") " pod="openshift-marketplace/redhat-operators-6nxjc"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.325845    4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f648359-ab53-49a7-8f1a-77281c2bd53c-utilities\") pod \"redhat-operators-78bg4\" (UID: \"4f648359-ab53-49a7-8f1a-77281c2bd53c\") " pod="openshift-marketplace/redhat-operators-78bg4"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.332695    4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-x85nm" event={"ID":"db710f25-e573-414c-9129-0dfa945d0b71","Type":"ContainerStarted","Data":"1d9e566e86385a42798402a5e088ba782e33fbb9244935e33f3450af02dbca60"}
Jan 20 11:07:14 crc kubenswrapper[4725]: E0120 11:07:14.352824    4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:14.852781128 +0000 UTC m=+163.061103111 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.374066    4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66ggl\" (UniqueName: \"kubernetes.io/projected/4f648359-ab53-49a7-8f1a-77281c2bd53c-kube-api-access-66ggl\") pod \"redhat-operators-78bg4\" (UID: \"4f648359-ab53-49a7-8f1a-77281c2bd53c\") " pod="openshift-marketplace/redhat-operators-78bg4"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.393274    4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-7j2sn" event={"ID":"eca1f8da-59f2-404e-a5e0-dbe1a191b885","Type":"ContainerStarted","Data":"164dfe079177ee6da99408a57439b21e42108b8ebad11255499fbdf5b4386afe"}
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.421313    4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-g28q4" event={"ID":"6275b0b4-f9e0-4ecc-8a7a-3d702ec753eb","Type":"ContainerStarted","Data":"f1b4836b5552db8e9659bf24041c0c226e75f53c819ee8e33cb00a2edd304a13"}
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.427498    4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8ntp\" (UniqueName: \"kubernetes.io/projected/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-kube-api-access-k8ntp\") pod \"redhat-operators-6nxjc\" (UID: \"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6\") " pod="openshift-marketplace/redhat-operators-6nxjc"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.428556    4725 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tgvmj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body=
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.428602    4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" podUID="502a4051-5a60-4e90-a3f2-7dc035950a9b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.429705    4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:14 crc kubenswrapper[4725]: E0120 11:07:14.430950    4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:14.930931683 +0000 UTC m=+163.139253656 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.444961    4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.445004    4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.462755    4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6n4zh"]
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.510515    4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6nxjc"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.577658    4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.649020    4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-78bg4"
Jan 20 11:07:14 crc kubenswrapper[4725]: E0120 11:07:14.652206    4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:15.152189109 +0000 UTC m=+163.360511082 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.684353    4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:14 crc kubenswrapper[4725]: E0120 11:07:14.684864    4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:15.184837308 +0000 UTC m=+163.393159281 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.733314    4725 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-d7t4z container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.733398    4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z" podUID="e1eba244-7c59-4933-ad4c-5dccc8fdc854" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.740027    4725 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-d7t4z container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.740155    4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z" podUID="e1eba244-7c59-4933-ad4c-5dccc8fdc854" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.765990    4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8pplm"]
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.889760    4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:14 crc kubenswrapper[4725]: E0120 11:07:14.890332    4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:15.390320936 +0000 UTC m=+163.598642909 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.996706    4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:14 crc kubenswrapper[4725]: E0120 11:07:14.997180    4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:15.497161755 +0000 UTC m=+163.705483728 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.098592    4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:15 crc kubenswrapper[4725]: E0120 11:07:15.099034    4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:15.599019397 +0000 UTC m=+163.807341370 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.201504    4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:15 crc kubenswrapper[4725]: E0120 11:07:15.201995    4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:15.701978903 +0000 UTC m=+163.910300876 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.206830    4725 patch_prober.go:28] interesting pod/apiserver-76f77b778f-twkw7 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 20 11:07:15 crc kubenswrapper[4725]: [+]log ok
Jan 20 11:07:15 crc kubenswrapper[4725]: [+]etcd ok
Jan 20 11:07:15 crc kubenswrapper[4725]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 20 11:07:15 crc kubenswrapper[4725]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 20 11:07:15 crc kubenswrapper[4725]: [+]poststarthook/max-in-flight-filter ok
Jan 20 11:07:15 crc kubenswrapper[4725]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 20 11:07:15 crc kubenswrapper[4725]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 20 11:07:15 crc kubenswrapper[4725]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 20 11:07:15 crc kubenswrapper[4725]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Jan 20 11:07:15 crc kubenswrapper[4725]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 20 11:07:15 crc kubenswrapper[4725]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 20 11:07:15 crc kubenswrapper[4725]: [+]poststarthook/openshift.io-startinformers ok
Jan 20 11:07:15 crc kubenswrapper[4725]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 20 11:07:15 crc kubenswrapper[4725]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 20 11:07:15 crc kubenswrapper[4725]: livez check failed
Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.207196    4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-twkw7" podUID="cb0c9cf6-4966-4bd0-8933-823bc00e103c" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.244794    4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 20 11:07:15 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld
Jan 20 11:07:15 crc kubenswrapper[4725]: [+]process-running ok
Jan 20 11:07:15 crc kubenswrapper[4725]: healthz check failed
Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.244844    4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.271186    4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z"
Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.303197    4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:15 crc kubenswrapper[4725]: E0120 11:07:15.303602    4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:15.803588167 +0000 UTC m=+164.011910140 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.352841    4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-g28q4" podStartSLOduration=141.352824629 podStartE2EDuration="2m21.352824629s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:15.351825299 +0000 UTC m=+163.560147272" watchObservedRunningTime="2026-01-20 11:07:15.352824629 +0000 UTC m=+163.561146602"
Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.372404    4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9"
Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.406682    4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw4rl\" (UniqueName: \"kubernetes.io/projected/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-kube-api-access-dw4rl\") pod \"e2d56c6e-b9ad-4de9-8fe6-06b00293050e\" (UID: \"e2d56c6e-b9ad-4de9-8fe6-06b00293050e\") "
Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.406766    4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-secret-volume\") pod \"e2d56c6e-b9ad-4de9-8fe6-06b00293050e\" (UID: \"e2d56c6e-b9ad-4de9-8fe6-06b00293050e\") "
Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.406866    4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.406949    4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-config-volume\") pod \"e2d56c6e-b9ad-4de9-8fe6-06b00293050e\" (UID: \"e2d56c6e-b9ad-4de9-8fe6-06b00293050e\") "
Jan 20 11:07:15 crc kubenswrapper[4725]: E0120 11:07:15.408342    4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:15.908292668 +0000 UTC m=+164.116614641 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.413348    4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-config-volume" (OuterVolumeSpecName: "config-volume") pod "e2d56c6e-b9ad-4de9-8fe6-06b00293050e" (UID: "e2d56c6e-b9ad-4de9-8fe6-06b00293050e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.430767    4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-kube-api-access-dw4rl" (OuterVolumeSpecName: "kube-api-access-dw4rl") pod "e2d56c6e-b9ad-4de9-8fe6-06b00293050e" (UID: "e2d56c6e-b9ad-4de9-8fe6-06b00293050e"). InnerVolumeSpecName "kube-api-access-dw4rl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.431121    4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e2d56c6e-b9ad-4de9-8fe6-06b00293050e" (UID: "e2d56c6e-b9ad-4de9-8fe6-06b00293050e"). InnerVolumeSpecName "secret-volume".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.445267 4725 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-lhx4z container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.445329 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.458668 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-7j2sn" podStartSLOduration=140.458649146 podStartE2EDuration="2m20.458649146s" podCreationTimestamp="2026-01-20 11:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:15.457660566 +0000 UTC m=+163.665982539" watchObservedRunningTime="2026-01-20 11:07:15.458649146 +0000 UTC m=+163.666971119" Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.475702 4725 generic.go:334] "Generic (PLEG): container finished" podID="98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" containerID="892418dd3e77ceab40f34a8a0fd5716151217dc2c55480d979119a50b49216a9" exitCode=0 Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.475756 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vs4qk" 
event={"ID":"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9","Type":"ContainerDied","Data":"892418dd3e77ceab40f34a8a0fd5716151217dc2c55480d979119a50b49216a9"} Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.475781 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vs4qk" event={"ID":"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9","Type":"ContainerStarted","Data":"42297e2c5e4314f8ac19bdb872ed1cfccfa8006702130dd94931f10251920fbc"} Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.477897 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.501487 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxmdj"] Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.518292 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" event={"ID":"1bb3a268-d628-4c34-b9ca-38d43d82bf86","Type":"ContainerStarted","Data":"9caa16c46bc30be6e071b0e834721a3aa7b66b87e46c812829739a0491423617"} Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.519319 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.519365 4725 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.519375 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw4rl\" 
(UniqueName: \"kubernetes.io/projected/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-kube-api-access-dw4rl\") on node \"crc\" DevicePath \"\"" Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.519393 4725 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 11:07:15 crc kubenswrapper[4725]: E0120 11:07:15.519618 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:16.019607639 +0000 UTC m=+164.227929612 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.542365 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6n4zh" event={"ID":"7ebdb343-11c1-4e64-9538-98ca4298b821","Type":"ContainerStarted","Data":"a8988d59128eab2f53f7dd920de01a7b98a3e4e952f90431883ff756e50dadbe"} Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.542415 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6n4zh" event={"ID":"7ebdb343-11c1-4e64-9538-98ca4298b821","Type":"ContainerStarted","Data":"b3c438c94578ed127de08ab71e5b40caf95c66fe2d7a2b37a5e91dfd80db62be"} Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.579535 4725 plugin_watcher.go:194] "Adding socket path or updating timestamp to 
desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.622697 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:15 crc kubenswrapper[4725]: E0120 11:07:15.622996 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:16.122981898 +0000 UTC m=+164.331303861 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.646499 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8pplm" event={"ID":"1ba77d4b-0178-4730-8869-389efdf58851","Type":"ContainerStarted","Data":"1a440377416e2e3be97cb4385521f0b527fd44fc3d296005eb3a6215b7798a51"} Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.648997 4725 generic.go:334] "Generic (PLEG): container finished" podID="247dcae1-930b-476d-abbe-f33c3da0730b" containerID="319b868656c7fff6314303e2870aac558be49b56b40a78923c6a22ddb62ffc72" exitCode=0 Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.649043 4725 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbr29" event={"ID":"247dcae1-930b-476d-abbe-f33c3da0730b","Type":"ContainerDied","Data":"319b868656c7fff6314303e2870aac558be49b56b40a78923c6a22ddb62ffc72"} Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.653035 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9" Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.659345 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9" event={"ID":"e2d56c6e-b9ad-4de9-8fe6-06b00293050e","Type":"ContainerDied","Data":"65a351e547318d4029df04eb1e821ccf32f46b5e2d9c44ec151c7be7e639c1ca"} Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.659392 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65a351e547318d4029df04eb1e821ccf32f46b5e2d9c44ec151c7be7e639c1ca" Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.724696 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:15 crc kubenswrapper[4725]: E0120 11:07:15.725049 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:16.225032785 +0000 UTC m=+164.433354758 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.758429 4725 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-20T11:07:15.579558828Z","Handler":null,"Name":""} Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.786067 4725 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.786114 4725 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.827883 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6nxjc"] Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.828630 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:15 crc kubenswrapper[4725]: W0120 11:07:15.874249 4725 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7865a54a_be9b_4a0a_8c84_b45c8bfe40e6.slice/crio-c444dae3d5ca85882553d57b5c52f2afebdc1ac865ea8fa27ac7b506e3700c60 WatchSource:0}: Error finding container c444dae3d5ca85882553d57b5c52f2afebdc1ac865ea8fa27ac7b506e3700c60: Status 404 returned error can't find the container with id c444dae3d5ca85882553d57b5c52f2afebdc1ac865ea8fa27ac7b506e3700c60 Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.889589 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.995190 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.062098 4725 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.062161 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.144799 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.151516 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2jtp"] Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.270606 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 11:07:16 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Jan 20 11:07:16 crc kubenswrapper[4725]: [+]process-running ok Jan 20 11:07:16 crc kubenswrapper[4725]: healthz check failed Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.270883 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.325767 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-78bg4"] Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.380036 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.830160 4725 generic.go:334] "Generic (PLEG): container finished" podID="7ebdb343-11c1-4e64-9538-98ca4298b821" containerID="a8988d59128eab2f53f7dd920de01a7b98a3e4e952f90431883ff756e50dadbe" exitCode=0 Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.830395 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6n4zh" event={"ID":"7ebdb343-11c1-4e64-9538-98ca4298b821","Type":"ContainerDied","Data":"a8988d59128eab2f53f7dd920de01a7b98a3e4e952f90431883ff756e50dadbe"} Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.848163 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2jtp" event={"ID":"10de7f77-2b14-4c56-b4db-ebb93422b89c","Type":"ContainerStarted","Data":"fbfff8e8818beecfb8c02cfbcbeb21c81754f2aeda1e021b3b81559a276b8a66"} Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.852663 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78bg4" event={"ID":"4f648359-ab53-49a7-8f1a-77281c2bd53c","Type":"ContainerStarted","Data":"c8cf137c59938a71804fd93575de29dac65e3fbdae7d9616af8e1e0e425812c7"} Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.925882 4725 generic.go:334] "Generic (PLEG): container finished" podID="1ba77d4b-0178-4730-8869-389efdf58851" containerID="38beb6d6731fbc36ccb21ece2faf5cceb4d8191e98451bfd04d8127368937300" exitCode=0 Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.926161 4725 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8pplm" event={"ID":"1ba77d4b-0178-4730-8869-389efdf58851","Type":"ContainerDied","Data":"38beb6d6731fbc36ccb21ece2faf5cceb4d8191e98451bfd04d8127368937300"} Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.972481 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.973034 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nxjc" event={"ID":"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6","Type":"ContainerStarted","Data":"c444dae3d5ca85882553d57b5c52f2afebdc1ac865ea8fa27ac7b506e3700c60"} Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.976410 4725 generic.go:334] "Generic (PLEG): container finished" podID="39d02691-2128-45e8-841b-5bbf79e0a116" containerID="bef010ae40f12ebf94868b1a7f63b8c8ce98852cd1c4ccb364c0b676606ca709" exitCode=0 Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.976543 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxmdj" event={"ID":"39d02691-2128-45e8-841b-5bbf79e0a116","Type":"ContainerDied","Data":"bef010ae40f12ebf94868b1a7f63b8c8ce98852cd1c4ccb364c0b676606ca709"} Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.976578 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxmdj" event={"ID":"39d02691-2128-45e8-841b-5bbf79e0a116","Type":"ContainerStarted","Data":"947644fa4cdb3ece3385cefa57c8a4ab47c9b07453257db4d816fb94806bf10c"} Jan 20 11:07:17 crc kubenswrapper[4725]: I0120 11:07:17.086729 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" 
event={"ID":"1bb3a268-d628-4c34-b9ca-38d43d82bf86","Type":"ContainerStarted","Data":"c9730c818f2fe24e35cb8693b04250657f12a4654e60ab7b891225f0df5cbb35"} Jan 20 11:07:17 crc kubenswrapper[4725]: I0120 11:07:17.259957 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 11:07:17 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Jan 20 11:07:17 crc kubenswrapper[4725]: [+]process-running ok Jan 20 11:07:17 crc kubenswrapper[4725]: healthz check failed Jan 20 11:07:17 crc kubenswrapper[4725]: I0120 11:07:17.260318 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 11:07:17 crc kubenswrapper[4725]: I0120 11:07:17.740671 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w5jhq"] Jan 20 11:07:17 crc kubenswrapper[4725]: I0120 11:07:17.936613 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:17 crc kubenswrapper[4725]: I0120 11:07:17.942331 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:18 crc kubenswrapper[4725]: I0120 11:07:18.173056 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" event={"ID":"cec62c65-a846-4cc0-bb51-01d2d70c4c85","Type":"ContainerStarted","Data":"ed7560860908ee6c4f83f3490cbdd1843d5adf7ac8051897ed017552b83ca2ee"} Jan 20 11:07:18 crc kubenswrapper[4725]: I0120 11:07:18.214611 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" event={"ID":"1bb3a268-d628-4c34-b9ca-38d43d82bf86","Type":"ContainerStarted","Data":"f469d9e066b529cf53b0a7c8792a55c1826f2aa074b17a33ebb83670eceeed8e"} Jan 20 11:07:18 crc kubenswrapper[4725]: I0120 11:07:18.225055 4725 generic.go:334] "Generic (PLEG): container finished" podID="10de7f77-2b14-4c56-b4db-ebb93422b89c" containerID="79b3dc2509427f8e48ea65515f6bd240f048253490613646e6daeff65ff41302" exitCode=0 Jan 20 11:07:18 crc kubenswrapper[4725]: I0120 11:07:18.225478 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2jtp" event={"ID":"10de7f77-2b14-4c56-b4db-ebb93422b89c","Type":"ContainerDied","Data":"79b3dc2509427f8e48ea65515f6bd240f048253490613646e6daeff65ff41302"} Jan 20 11:07:18 crc kubenswrapper[4725]: I0120 11:07:18.283224 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 11:07:18 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Jan 20 11:07:18 crc kubenswrapper[4725]: [+]process-running ok Jan 20 11:07:18 crc kubenswrapper[4725]: healthz check failed Jan 20 11:07:18 crc kubenswrapper[4725]: I0120 11:07:18.283291 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 11:07:18 crc kubenswrapper[4725]: I0120 11:07:18.303322 4725 generic.go:334] "Generic (PLEG): container finished" podID="4f648359-ab53-49a7-8f1a-77281c2bd53c" containerID="06596abc1be5a61b774b86675bea7d758f393f271eafec99aee9e0618b84133b" exitCode=0 Jan 20 11:07:18 crc kubenswrapper[4725]: I0120 11:07:18.303392 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-78bg4" event={"ID":"4f648359-ab53-49a7-8f1a-77281c2bd53c","Type":"ContainerDied","Data":"06596abc1be5a61b774b86675bea7d758f393f271eafec99aee9e0618b84133b"} Jan 20 11:07:18 crc kubenswrapper[4725]: I0120 11:07:18.423332 4725 generic.go:334] "Generic (PLEG): container finished" podID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" containerID="9f5ff65ac43718d6c6a2cb0ff08d34aa44b3c5b853c8111fc5672b5c544f3567" exitCode=0 Jan 20 11:07:18 crc kubenswrapper[4725]: I0120 11:07:18.424221 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nxjc" event={"ID":"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6","Type":"ContainerDied","Data":"9f5ff65ac43718d6c6a2cb0ff08d34aa44b3c5b853c8111fc5672b5c544f3567"} Jan 20 11:07:18 crc kubenswrapper[4725]: I0120 11:07:18.432959 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" podStartSLOduration=19.432941977 podStartE2EDuration="19.432941977s" podCreationTimestamp="2026-01-20 11:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:18.347500933 +0000 UTC m=+166.555822906" watchObservedRunningTime="2026-01-20 11:07:18.432941977 +0000 UTC m=+166.641263950" Jan 20 11:07:18 crc kubenswrapper[4725]: I0120 11:07:18.694775 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs\") pod \"network-metrics-daemon-5lfc4\" (UID: \"a5d55efc-e85a-4a02-a4ce-7355df9fea66\") " pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:07:18 crc kubenswrapper[4725]: I0120 11:07:18.704774 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs\") pod 
\"network-metrics-daemon-5lfc4\" (UID: \"a5d55efc-e85a-4a02-a4ce-7355df9fea66\") " pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:07:19 crc kubenswrapper[4725]: I0120 11:07:19.089935 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:07:19 crc kubenswrapper[4725]: I0120 11:07:19.230070 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 11:07:19 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Jan 20 11:07:19 crc kubenswrapper[4725]: [+]process-running ok Jan 20 11:07:19 crc kubenswrapper[4725]: healthz check failed Jan 20 11:07:19 crc kubenswrapper[4725]: I0120 11:07:19.230184 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 20 11:07:19 crc kubenswrapper[4725]: I0120 11:07:19.230184 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 11:07:19 crc kubenswrapper[4725]: E0120 11:07:19.230450 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d56c6e-b9ad-4de9-8fe6-06b00293050e" containerName="collect-profiles" Jan 20 11:07:19 crc kubenswrapper[4725]: I0120 11:07:19.230464 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d56c6e-b9ad-4de9-8fe6-06b00293050e" containerName="collect-profiles" Jan 20 11:07:19 crc kubenswrapper[4725]: I0120 11:07:19.232363 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d56c6e-b9ad-4de9-8fe6-06b00293050e" containerName="collect-profiles" Jan 20 11:07:19 crc kubenswrapper[4725]: I0120 11:07:19.232858 
4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 11:07:19 crc kubenswrapper[4725]: I0120 11:07:19.251530 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 20 11:07:19 crc kubenswrapper[4725]: I0120 11:07:19.251743 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 20 11:07:19 crc kubenswrapper[4725]: I0120 11:07:19.282954 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 20 11:07:19 crc kubenswrapper[4725]: I0120 11:07:19.417184 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ec338f6-dfbe-4760-b504-c0ad09ff73e4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3ec338f6-dfbe-4760-b504-c0ad09ff73e4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 11:07:19 crc kubenswrapper[4725]: I0120 11:07:19.417520 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ec338f6-dfbe-4760-b504-c0ad09ff73e4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3ec338f6-dfbe-4760-b504-c0ad09ff73e4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 11:07:19 crc kubenswrapper[4725]: I0120 11:07:19.518419 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ec338f6-dfbe-4760-b504-c0ad09ff73e4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3ec338f6-dfbe-4760-b504-c0ad09ff73e4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 11:07:19 crc kubenswrapper[4725]: I0120 11:07:19.518591 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ec338f6-dfbe-4760-b504-c0ad09ff73e4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3ec338f6-dfbe-4760-b504-c0ad09ff73e4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 11:07:19 crc kubenswrapper[4725]: I0120 11:07:19.519188 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ec338f6-dfbe-4760-b504-c0ad09ff73e4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3ec338f6-dfbe-4760-b504-c0ad09ff73e4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 11:07:20 crc kubenswrapper[4725]: I0120 11:07:19.941359 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" event={"ID":"cec62c65-a846-4cc0-bb51-01d2d70c4c85","Type":"ContainerStarted","Data":"8a8b25d03f991998693369336d59387900063f6ca5d2b3f0359c5e5416c3ec8b"} Jan 20 11:07:20 crc kubenswrapper[4725]: I0120 11:07:19.941438 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:20 crc kubenswrapper[4725]: I0120 11:07:20.025131 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ec338f6-dfbe-4760-b504-c0ad09ff73e4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3ec338f6-dfbe-4760-b504-c0ad09ff73e4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 11:07:20 crc kubenswrapper[4725]: I0120 11:07:20.066361 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" podStartSLOduration=146.066336779 podStartE2EDuration="2m26.066336779s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:20.065317946 +0000 UTC m=+168.273639939" watchObservedRunningTime="2026-01-20 11:07:20.066336779 +0000 UTC m=+168.274658752" Jan 20 11:07:20 crc kubenswrapper[4725]: I0120 11:07:20.225612 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 11:07:20 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Jan 20 11:07:20 crc kubenswrapper[4725]: [+]process-running ok Jan 20 11:07:20 crc kubenswrapper[4725]: healthz check failed Jan 20 11:07:20 crc kubenswrapper[4725]: I0120 11:07:20.225969 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 11:07:20 crc kubenswrapper[4725]: I0120 11:07:20.306770 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 11:07:20 crc kubenswrapper[4725]: I0120 11:07:20.571752 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5lfc4"] Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.037815 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" event={"ID":"a5d55efc-e85a-4a02-a4ce-7355df9fea66","Type":"ContainerStarted","Data":"0043dacebf82e1e855679316749abf1572b578bb3df75e31802796bae6941f2f"} Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.278641 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 11:07:21 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Jan 20 11:07:21 crc kubenswrapper[4725]: [+]process-running ok Jan 20 11:07:21 crc kubenswrapper[4725]: healthz check failed Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.278971 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.331297 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.538590 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.540096 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.543364 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.543652 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.590844 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.620609 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c3ba724-600e-4af4-ab50-ac02931703cd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1c3ba724-600e-4af4-ab50-ac02931703cd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.620667 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c3ba724-600e-4af4-ab50-ac02931703cd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1c3ba724-600e-4af4-ab50-ac02931703cd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.707216 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-x85nm" Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.722370 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c3ba724-600e-4af4-ab50-ac02931703cd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1c3ba724-600e-4af4-ab50-ac02931703cd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" 
Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.722411 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c3ba724-600e-4af4-ab50-ac02931703cd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1c3ba724-600e-4af4-ab50-ac02931703cd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.722503 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c3ba724-600e-4af4-ab50-ac02931703cd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1c3ba724-600e-4af4-ab50-ac02931703cd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.795860 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c3ba724-600e-4af4-ab50-ac02931703cd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1c3ba724-600e-4af4-ab50-ac02931703cd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.862501 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 11:07:22 crc kubenswrapper[4725]: I0120 11:07:22.225795 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 11:07:22 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Jan 20 11:07:22 crc kubenswrapper[4725]: [+]process-running ok Jan 20 11:07:22 crc kubenswrapper[4725]: healthz check failed Jan 20 11:07:22 crc kubenswrapper[4725]: I0120 11:07:22.225869 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 11:07:22 crc kubenswrapper[4725]: I0120 11:07:22.236232 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3ec338f6-dfbe-4760-b504-c0ad09ff73e4","Type":"ContainerStarted","Data":"e5a4148505fb0e4a5e1b82e8ef6c225248aab25fa9c4ba3beafd03def4b81975"} Jan 20 11:07:22 crc kubenswrapper[4725]: I0120 11:07:22.832492 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 20 11:07:23 crc kubenswrapper[4725]: I0120 11:07:23.057986 4725 patch_prober.go:28] interesting pod/console-f9d7485db-75nfb container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.31:8443/health\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Jan 20 11:07:23 crc kubenswrapper[4725]: I0120 11:07:23.058072 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-75nfb" podUID="b8859d17-62ea-47b3-ac63-537e69ec9f90" containerName="console" probeResult="failure" 
output="Get \"https://10.217.0.31:8443/health\": dial tcp 10.217.0.31:8443: connect: connection refused" Jan 20 11:07:23 crc kubenswrapper[4725]: I0120 11:07:23.230283 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 11:07:23 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Jan 20 11:07:23 crc kubenswrapper[4725]: [+]process-running ok Jan 20 11:07:23 crc kubenswrapper[4725]: healthz check failed Jan 20 11:07:23 crc kubenswrapper[4725]: I0120 11:07:23.230335 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 11:07:23 crc kubenswrapper[4725]: I0120 11:07:23.259056 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" Jan 20 11:07:23 crc kubenswrapper[4725]: I0120 11:07:23.285255 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:23 crc kubenswrapper[4725]: I0120 11:07:23.509752 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:07:23 crc kubenswrapper[4725]: I0120 11:07:23.509844 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 
10.217.0.15:8080: connect: connection refused" Jan 20 11:07:23 crc kubenswrapper[4725]: I0120 11:07:23.512198 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:07:23 crc kubenswrapper[4725]: I0120 11:07:23.512259 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:07:23 crc kubenswrapper[4725]: I0120 11:07:23.592320 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1c3ba724-600e-4af4-ab50-ac02931703cd","Type":"ContainerStarted","Data":"1c73f4f8089a92a0b6b7a028dac6aeb69d5b46fdbc672c2e6ac12f358ca9bcec"} Jan 20 11:07:23 crc kubenswrapper[4725]: I0120 11:07:23.648845 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3ec338f6-dfbe-4760-b504-c0ad09ff73e4","Type":"ContainerStarted","Data":"27dd8d1e6821e290aee0dbac19d45303743aa4766fc6094ca7f43758325a4a79"} Jan 20 11:07:23 crc kubenswrapper[4725]: I0120 11:07:23.667304 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" event={"ID":"a5d55efc-e85a-4a02-a4ce-7355df9fea66","Type":"ContainerStarted","Data":"4141521a59d9a97f045efdc71ae0fcd4cedc726430929a21ff2b638ea2bb5d4d"} Jan 20 11:07:23 crc kubenswrapper[4725]: I0120 11:07:23.669747 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=4.6697322230000005 podStartE2EDuration="4.669732223s" 
podCreationTimestamp="2026-01-20 11:07:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:23.666874264 +0000 UTC m=+171.875196237" watchObservedRunningTime="2026-01-20 11:07:23.669732223 +0000 UTC m=+171.878054196" Jan 20 11:07:24 crc kubenswrapper[4725]: I0120 11:07:24.279564 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-nxchh" Jan 20 11:07:24 crc kubenswrapper[4725]: I0120 11:07:24.285701 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-nxchh" Jan 20 11:07:24 crc kubenswrapper[4725]: I0120 11:07:24.715719 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" event={"ID":"a5d55efc-e85a-4a02-a4ce-7355df9fea66","Type":"ContainerStarted","Data":"71d14d5b8a89fa533d45e7a2e7ce7faed4b28b512da90a24eb88e6876290d391"} Jan 20 11:07:24 crc kubenswrapper[4725]: I0120 11:07:24.764914 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1c3ba724-600e-4af4-ab50-ac02931703cd","Type":"ContainerStarted","Data":"c17ac939ba1cf009322edad519220b6990322f13dc1944ac3985123b82ce45ca"} Jan 20 11:07:24 crc kubenswrapper[4725]: I0120 11:07:24.783324 4725 generic.go:334] "Generic (PLEG): container finished" podID="3ec338f6-dfbe-4760-b504-c0ad09ff73e4" containerID="27dd8d1e6821e290aee0dbac19d45303743aa4766fc6094ca7f43758325a4a79" exitCode=0 Jan 20 11:07:24 crc kubenswrapper[4725]: I0120 11:07:24.787830 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3ec338f6-dfbe-4760-b504-c0ad09ff73e4","Type":"ContainerDied","Data":"27dd8d1e6821e290aee0dbac19d45303743aa4766fc6094ca7f43758325a4a79"} Jan 20 11:07:24 crc kubenswrapper[4725]: I0120 11:07:24.791343 
4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.791318997 podStartE2EDuration="3.791318997s" podCreationTimestamp="2026-01-20 11:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:24.788204719 +0000 UTC m=+172.996526692" watchObservedRunningTime="2026-01-20 11:07:24.791318997 +0000 UTC m=+172.999640970" Jan 20 11:07:24 crc kubenswrapper[4725]: I0120 11:07:24.791666 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-5lfc4" podStartSLOduration=150.791661708 podStartE2EDuration="2m30.791661708s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:24.73973104 +0000 UTC m=+172.948053033" watchObservedRunningTime="2026-01-20 11:07:24.791661708 +0000 UTC m=+172.999983681" Jan 20 11:07:25 crc kubenswrapper[4725]: I0120 11:07:25.866381 4725 generic.go:334] "Generic (PLEG): container finished" podID="1c3ba724-600e-4af4-ab50-ac02931703cd" containerID="c17ac939ba1cf009322edad519220b6990322f13dc1944ac3985123b82ce45ca" exitCode=0 Jan 20 11:07:25 crc kubenswrapper[4725]: I0120 11:07:25.868391 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1c3ba724-600e-4af4-ab50-ac02931703cd","Type":"ContainerDied","Data":"c17ac939ba1cf009322edad519220b6990322f13dc1944ac3985123b82ce45ca"} Jan 20 11:07:26 crc kubenswrapper[4725]: I0120 11:07:26.453164 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 11:07:26 crc kubenswrapper[4725]: I0120 11:07:26.632280 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ec338f6-dfbe-4760-b504-c0ad09ff73e4-kubelet-dir\") pod \"3ec338f6-dfbe-4760-b504-c0ad09ff73e4\" (UID: \"3ec338f6-dfbe-4760-b504-c0ad09ff73e4\") " Jan 20 11:07:26 crc kubenswrapper[4725]: I0120 11:07:26.632393 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ec338f6-dfbe-4760-b504-c0ad09ff73e4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3ec338f6-dfbe-4760-b504-c0ad09ff73e4" (UID: "3ec338f6-dfbe-4760-b504-c0ad09ff73e4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:07:26 crc kubenswrapper[4725]: I0120 11:07:26.632819 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ec338f6-dfbe-4760-b504-c0ad09ff73e4-kube-api-access\") pod \"3ec338f6-dfbe-4760-b504-c0ad09ff73e4\" (UID: \"3ec338f6-dfbe-4760-b504-c0ad09ff73e4\") " Jan 20 11:07:26 crc kubenswrapper[4725]: I0120 11:07:26.633302 4725 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ec338f6-dfbe-4760-b504-c0ad09ff73e4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 20 11:07:26 crc kubenswrapper[4725]: I0120 11:07:26.657516 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ec338f6-dfbe-4760-b504-c0ad09ff73e4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3ec338f6-dfbe-4760-b504-c0ad09ff73e4" (UID: "3ec338f6-dfbe-4760-b504-c0ad09ff73e4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:07:26 crc kubenswrapper[4725]: I0120 11:07:26.727667 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:07:26 crc kubenswrapper[4725]: I0120 11:07:26.727732 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:07:26 crc kubenswrapper[4725]: I0120 11:07:26.736233 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ec338f6-dfbe-4760-b504-c0ad09ff73e4-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 11:07:26 crc kubenswrapper[4725]: I0120 11:07:26.895649 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3ec338f6-dfbe-4760-b504-c0ad09ff73e4","Type":"ContainerDied","Data":"e5a4148505fb0e4a5e1b82e8ef6c225248aab25fa9c4ba3beafd03def4b81975"} Jan 20 11:07:26 crc kubenswrapper[4725]: I0120 11:07:26.895708 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5a4148505fb0e4a5e1b82e8ef6c225248aab25fa9c4ba3beafd03def4b81975" Jan 20 11:07:26 crc kubenswrapper[4725]: I0120 11:07:26.895668 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 11:07:27 crc kubenswrapper[4725]: I0120 11:07:27.911581 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1c3ba724-600e-4af4-ab50-ac02931703cd","Type":"ContainerDied","Data":"1c73f4f8089a92a0b6b7a028dac6aeb69d5b46fdbc672c2e6ac12f358ca9bcec"} Jan 20 11:07:27 crc kubenswrapper[4725]: I0120 11:07:27.911871 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c73f4f8089a92a0b6b7a028dac6aeb69d5b46fdbc672c2e6ac12f358ca9bcec" Jan 20 11:07:28 crc kubenswrapper[4725]: I0120 11:07:28.005921 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 11:07:28 crc kubenswrapper[4725]: I0120 11:07:28.102529 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c3ba724-600e-4af4-ab50-ac02931703cd-kubelet-dir\") pod \"1c3ba724-600e-4af4-ab50-ac02931703cd\" (UID: \"1c3ba724-600e-4af4-ab50-ac02931703cd\") " Jan 20 11:07:28 crc kubenswrapper[4725]: I0120 11:07:28.102610 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c3ba724-600e-4af4-ab50-ac02931703cd-kube-api-access\") pod \"1c3ba724-600e-4af4-ab50-ac02931703cd\" (UID: \"1c3ba724-600e-4af4-ab50-ac02931703cd\") " Jan 20 11:07:28 crc kubenswrapper[4725]: I0120 11:07:28.103281 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c3ba724-600e-4af4-ab50-ac02931703cd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1c3ba724-600e-4af4-ab50-ac02931703cd" (UID: "1c3ba724-600e-4af4-ab50-ac02931703cd"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:07:28 crc kubenswrapper[4725]: I0120 11:07:28.160588 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c3ba724-600e-4af4-ab50-ac02931703cd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1c3ba724-600e-4af4-ab50-ac02931703cd" (UID: "1c3ba724-600e-4af4-ab50-ac02931703cd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:07:28 crc kubenswrapper[4725]: I0120 11:07:28.204338 4725 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c3ba724-600e-4af4-ab50-ac02931703cd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 20 11:07:28 crc kubenswrapper[4725]: I0120 11:07:28.204368 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c3ba724-600e-4af4-ab50-ac02931703cd-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 11:07:28 crc kubenswrapper[4725]: I0120 11:07:28.919845 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 11:07:33 crc kubenswrapper[4725]: I0120 11:07:33.137962 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:33 crc kubenswrapper[4725]: I0120 11:07:33.144416 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:33 crc kubenswrapper[4725]: I0120 11:07:33.419008 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:07:33 crc kubenswrapper[4725]: I0120 11:07:33.419064 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:07:33 crc kubenswrapper[4725]: I0120 11:07:33.419707 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:07:33 crc kubenswrapper[4725]: I0120 11:07:33.419739 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:07:33 crc kubenswrapper[4725]: I0120 11:07:33.419777 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-console/downloads-7954f5f757-2hmdd" Jan 20 11:07:33 crc kubenswrapper[4725]: I0120 11:07:33.420590 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"c40d839a198aef3fd3dea37307bd509fe523f84265c35bd129742ca5ff0a0f56"} pod="openshift-console/downloads-7954f5f757-2hmdd" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 20 11:07:33 crc kubenswrapper[4725]: I0120 11:07:33.420671 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" containerID="cri-o://c40d839a198aef3fd3dea37307bd509fe523f84265c35bd129742ca5ff0a0f56" gracePeriod=2 Jan 20 11:07:33 crc kubenswrapper[4725]: I0120 11:07:33.421631 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:07:33 crc kubenswrapper[4725]: I0120 11:07:33.421652 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:07:34 crc kubenswrapper[4725]: I0120 11:07:34.024277 4725 generic.go:334] "Generic (PLEG): container finished" podID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerID="c40d839a198aef3fd3dea37307bd509fe523f84265c35bd129742ca5ff0a0f56" exitCode=0 Jan 20 11:07:34 crc kubenswrapper[4725]: I0120 11:07:34.024488 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2hmdd" 
event={"ID":"6c5d8a1b-5c54-4877-8739-a83ab530197d","Type":"ContainerDied","Data":"c40d839a198aef3fd3dea37307bd509fe523f84265c35bd129742ca5ff0a0f56"} Jan 20 11:07:36 crc kubenswrapper[4725]: I0120 11:07:36.385950 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:42 crc kubenswrapper[4725]: I0120 11:07:42.793268 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:07:43 crc kubenswrapper[4725]: I0120 11:07:43.419057 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:07:43 crc kubenswrapper[4725]: I0120 11:07:43.419215 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:07:43 crc kubenswrapper[4725]: I0120 11:07:43.569358 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8" Jan 20 11:07:53 crc kubenswrapper[4725]: I0120 11:07:53.420378 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:07:53 crc kubenswrapper[4725]: I0120 11:07:53.420997 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" 
podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:07:56 crc kubenswrapper[4725]: I0120 11:07:56.727333 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:07:56 crc kubenswrapper[4725]: I0120 11:07:56.727894 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:07:56 crc kubenswrapper[4725]: I0120 11:07:56.727946 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:07:56 crc kubenswrapper[4725]: I0120 11:07:56.728539 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665"} pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 11:07:56 crc kubenswrapper[4725]: I0120 11:07:56.728608 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" containerID="cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665" gracePeriod=600 Jan 20 11:07:57 crc 
kubenswrapper[4725]: I0120 11:07:57.634940 4725 generic.go:334] "Generic (PLEG): container finished" podID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerID="1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665" exitCode=0 Jan 20 11:07:57 crc kubenswrapper[4725]: I0120 11:07:57.635037 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerDied","Data":"1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665"} Jan 20 11:07:58 crc kubenswrapper[4725]: E0120 11:07:58.516459 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 20 11:07:58 crc kubenswrapper[4725]: E0120 11:07:58.516717 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mk8lh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-vs4qk_openshift-marketplace(98dafc65-0a7c-41fd-abc5-8e8fba03ffa9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 11:07:58 crc kubenswrapper[4725]: E0120 11:07:58.518316 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-vs4qk" podUID="98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" Jan 20 11:07:58 crc 
kubenswrapper[4725]: E0120 11:07:58.812414 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-vs4qk" podUID="98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" Jan 20 11:07:58 crc kubenswrapper[4725]: E0120 11:07:58.900283 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 20 11:07:58 crc kubenswrapper[4725]: E0120 11:07:58.900607 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rkgp6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6n4zh_openshift-marketplace(7ebdb343-11c1-4e64-9538-98ca4298b821): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 11:07:58 crc kubenswrapper[4725]: E0120 11:07:58.902244 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-6n4zh" podUID="7ebdb343-11c1-4e64-9538-98ca4298b821" Jan 20 11:07:59 crc 
kubenswrapper[4725]: I0120 11:07:59.146270 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 20 11:07:59 crc kubenswrapper[4725]: E0120 11:07:59.146599 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c3ba724-600e-4af4-ab50-ac02931703cd" containerName="pruner" Jan 20 11:07:59 crc kubenswrapper[4725]: I0120 11:07:59.146611 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c3ba724-600e-4af4-ab50-ac02931703cd" containerName="pruner" Jan 20 11:07:59 crc kubenswrapper[4725]: E0120 11:07:59.146636 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ec338f6-dfbe-4760-b504-c0ad09ff73e4" containerName="pruner" Jan 20 11:07:59 crc kubenswrapper[4725]: I0120 11:07:59.146643 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ec338f6-dfbe-4760-b504-c0ad09ff73e4" containerName="pruner" Jan 20 11:07:59 crc kubenswrapper[4725]: I0120 11:07:59.146806 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ec338f6-dfbe-4760-b504-c0ad09ff73e4" containerName="pruner" Jan 20 11:07:59 crc kubenswrapper[4725]: I0120 11:07:59.146823 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c3ba724-600e-4af4-ab50-ac02931703cd" containerName="pruner" Jan 20 11:07:59 crc kubenswrapper[4725]: I0120 11:07:59.147301 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 11:07:59 crc kubenswrapper[4725]: I0120 11:07:59.150190 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 20 11:07:59 crc kubenswrapper[4725]: I0120 11:07:59.150639 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 20 11:07:59 crc kubenswrapper[4725]: I0120 11:07:59.150954 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 20 11:07:59 crc kubenswrapper[4725]: I0120 11:07:59.176662 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bad494d-da48-47e2-bcba-3908cecfbb5a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3bad494d-da48-47e2-bcba-3908cecfbb5a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 11:07:59 crc kubenswrapper[4725]: I0120 11:07:59.176786 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3bad494d-da48-47e2-bcba-3908cecfbb5a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3bad494d-da48-47e2-bcba-3908cecfbb5a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 11:07:59 crc kubenswrapper[4725]: I0120 11:07:59.280260 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bad494d-da48-47e2-bcba-3908cecfbb5a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3bad494d-da48-47e2-bcba-3908cecfbb5a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 11:07:59 crc kubenswrapper[4725]: I0120 11:07:59.280315 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/3bad494d-da48-47e2-bcba-3908cecfbb5a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3bad494d-da48-47e2-bcba-3908cecfbb5a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 11:07:59 crc kubenswrapper[4725]: I0120 11:07:59.280417 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3bad494d-da48-47e2-bcba-3908cecfbb5a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3bad494d-da48-47e2-bcba-3908cecfbb5a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 11:07:59 crc kubenswrapper[4725]: I0120 11:07:59.314021 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bad494d-da48-47e2-bcba-3908cecfbb5a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3bad494d-da48-47e2-bcba-3908cecfbb5a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 11:07:59 crc kubenswrapper[4725]: I0120 11:07:59.474632 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 11:08:03 crc kubenswrapper[4725]: I0120 11:08:03.139911 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 20 11:08:03 crc kubenswrapper[4725]: I0120 11:08:03.141916 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 20 11:08:03 crc kubenswrapper[4725]: I0120 11:08:03.163277 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 20 11:08:03 crc kubenswrapper[4725]: I0120 11:08:03.240222 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d51d3df-3326-410b-b913-a269f46bb674-kube-api-access\") pod \"installer-9-crc\" (UID: \"9d51d3df-3326-410b-b913-a269f46bb674\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 11:08:03 crc kubenswrapper[4725]: I0120 11:08:03.240265 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9d51d3df-3326-410b-b913-a269f46bb674-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9d51d3df-3326-410b-b913-a269f46bb674\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 11:08:03 crc kubenswrapper[4725]: I0120 11:08:03.240283 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9d51d3df-3326-410b-b913-a269f46bb674-var-lock\") pod \"installer-9-crc\" (UID: \"9d51d3df-3326-410b-b913-a269f46bb674\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 11:08:03 crc kubenswrapper[4725]: I0120 11:08:03.348768 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d51d3df-3326-410b-b913-a269f46bb674-kube-api-access\") pod \"installer-9-crc\" (UID: \"9d51d3df-3326-410b-b913-a269f46bb674\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 11:08:03 crc kubenswrapper[4725]: I0120 11:08:03.348836 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/9d51d3df-3326-410b-b913-a269f46bb674-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9d51d3df-3326-410b-b913-a269f46bb674\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 11:08:03 crc kubenswrapper[4725]: I0120 11:08:03.348855 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9d51d3df-3326-410b-b913-a269f46bb674-var-lock\") pod \"installer-9-crc\" (UID: \"9d51d3df-3326-410b-b913-a269f46bb674\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 11:08:03 crc kubenswrapper[4725]: I0120 11:08:03.348976 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9d51d3df-3326-410b-b913-a269f46bb674-var-lock\") pod \"installer-9-crc\" (UID: \"9d51d3df-3326-410b-b913-a269f46bb674\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 11:08:03 crc kubenswrapper[4725]: I0120 11:08:03.349063 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9d51d3df-3326-410b-b913-a269f46bb674-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9d51d3df-3326-410b-b913-a269f46bb674\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 11:08:03 crc kubenswrapper[4725]: I0120 11:08:03.368893 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d51d3df-3326-410b-b913-a269f46bb674-kube-api-access\") pod \"installer-9-crc\" (UID: \"9d51d3df-3326-410b-b913-a269f46bb674\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 11:08:03 crc kubenswrapper[4725]: I0120 11:08:03.419494 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 
11:08:03 crc kubenswrapper[4725]: I0120 11:08:03.419624 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:03 crc kubenswrapper[4725]: I0120 11:08:03.474393 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 20 11:08:03 crc kubenswrapper[4725]: E0120 11:08:03.912279 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6n4zh" podUID="7ebdb343-11c1-4e64-9538-98ca4298b821" Jan 20 11:08:04 crc kubenswrapper[4725]: E0120 11:08:04.002361 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 20 11:08:04 crc kubenswrapper[4725]: E0120 11:08:04.002838 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-66ggl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-78bg4_openshift-marketplace(4f648359-ab53-49a7-8f1a-77281c2bd53c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 11:08:04 crc kubenswrapper[4725]: E0120 11:08:04.004001 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-78bg4" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" Jan 20 11:08:07 crc 
kubenswrapper[4725]: E0120 11:08:07.299204 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-78bg4" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" Jan 20 11:08:13 crc kubenswrapper[4725]: I0120 11:08:13.418586 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:13 crc kubenswrapper[4725]: I0120 11:08:13.419410 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:13 crc kubenswrapper[4725]: E0120 11:08:13.665944 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 20 11:08:13 crc kubenswrapper[4725]: E0120 11:08:13.666193 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z8wq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vbr29_openshift-marketplace(247dcae1-930b-476d-abbe-f33c3da0730b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 11:08:13 crc kubenswrapper[4725]: E0120 11:08:13.667683 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-vbr29" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" Jan 20 11:08:13 crc 
kubenswrapper[4725]: E0120 11:08:13.843578 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 20 11:08:13 crc kubenswrapper[4725]: E0120 11:08:13.843806 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m8h6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-8pplm_openshift-marketplace(1ba77d4b-0178-4730-8869-389efdf58851): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 11:08:13 crc kubenswrapper[4725]: E0120 11:08:13.845062 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-8pplm" podUID="1ba77d4b-0178-4730-8869-389efdf58851" Jan 20 11:08:14 crc kubenswrapper[4725]: E0120 11:08:14.411153 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 20 11:08:14 crc kubenswrapper[4725]: E0120 11:08:14.411335 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k8ntp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-6nxjc_openshift-marketplace(7865a54a-be9b-4a0a-8c84-b45c8bfe40e6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 11:08:14 crc kubenswrapper[4725]: E0120 11:08:14.412606 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-6nxjc" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" Jan 20 11:08:16 crc 
kubenswrapper[4725]: E0120 11:08:16.891356 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vbr29" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" Jan 20 11:08:16 crc kubenswrapper[4725]: E0120 11:08:16.891529 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8pplm" podUID="1ba77d4b-0178-4730-8869-389efdf58851" Jan 20 11:08:16 crc kubenswrapper[4725]: E0120 11:08:16.892064 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-6nxjc" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" Jan 20 11:08:16 crc kubenswrapper[4725]: E0120 11:08:16.976439 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 20 11:08:16 crc kubenswrapper[4725]: E0120 11:08:16.976627 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d2n6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-lxmdj_openshift-marketplace(39d02691-2128-45e8-841b-5bbf79e0a116): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 11:08:16 crc kubenswrapper[4725]: E0120 11:08:16.978099 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-lxmdj" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" Jan 20 11:08:16 crc 
kubenswrapper[4725]: E0120 11:08:16.986367 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 20 11:08:16 crc kubenswrapper[4725]: E0120 11:08:16.986525 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fqcqh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-c2jtp_openshift-marketplace(10de7f77-2b14-4c56-b4db-ebb93422b89c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 11:08:16 crc kubenswrapper[4725]: E0120 11:08:16.989253 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-c2jtp" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" Jan 20 11:08:17 crc kubenswrapper[4725]: I0120 11:08:17.401107 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 20 11:08:17 crc kubenswrapper[4725]: I0120 11:08:17.411359 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 20 11:08:18 crc kubenswrapper[4725]: I0120 11:08:17.772384 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9d51d3df-3326-410b-b913-a269f46bb674","Type":"ContainerStarted","Data":"85530cce234d8a705121a8934ff7069e86642c36409985a7688a7884b5e723ae"} Jan 20 11:08:18 crc kubenswrapper[4725]: I0120 11:08:17.773792 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3bad494d-da48-47e2-bcba-3908cecfbb5a","Type":"ContainerStarted","Data":"f3aec21c53a64aee3c2463f463b5a0fee8ad405f9757e5a135714fa18e74494f"} Jan 20 11:08:18 crc kubenswrapper[4725]: I0120 11:08:17.775729 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2hmdd" event={"ID":"6c5d8a1b-5c54-4877-8739-a83ab530197d","Type":"ContainerStarted","Data":"224d9eef3e311d8f59659266cbc05e855c7275a59ee55b347942977918828c29"} Jan 20 11:08:18 crc kubenswrapper[4725]: I0120 
11:08:17.776961 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-2hmdd" Jan 20 11:08:18 crc kubenswrapper[4725]: I0120 11:08:17.777162 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:18 crc kubenswrapper[4725]: I0120 11:08:17.777207 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:18 crc kubenswrapper[4725]: I0120 11:08:17.780090 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"c6e4775cbb437f357a123202b57e1186c3ce260099a44987956a10f515d8ed5f"} Jan 20 11:08:18 crc kubenswrapper[4725]: E0120 11:08:17.784476 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lxmdj" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" Jan 20 11:08:18 crc kubenswrapper[4725]: E0120 11:08:17.785096 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-c2jtp" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" Jan 20 11:08:18 crc kubenswrapper[4725]: 
I0120 11:08:18.794880 4725 generic.go:334] "Generic (PLEG): container finished" podID="98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" containerID="058803a271e18294b6a526aecf968520aa7cedead52dfdc4165a6133e9e375f6" exitCode=0 Jan 20 11:08:18 crc kubenswrapper[4725]: I0120 11:08:18.795052 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vs4qk" event={"ID":"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9","Type":"ContainerDied","Data":"058803a271e18294b6a526aecf968520aa7cedead52dfdc4165a6133e9e375f6"} Jan 20 11:08:18 crc kubenswrapper[4725]: I0120 11:08:18.798982 4725 generic.go:334] "Generic (PLEG): container finished" podID="7ebdb343-11c1-4e64-9538-98ca4298b821" containerID="e298ffa53486948221219263d81f91dd0aaf57b63b66a788f8e75324e688da37" exitCode=0 Jan 20 11:08:18 crc kubenswrapper[4725]: I0120 11:08:18.799060 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6n4zh" event={"ID":"7ebdb343-11c1-4e64-9538-98ca4298b821","Type":"ContainerDied","Data":"e298ffa53486948221219263d81f91dd0aaf57b63b66a788f8e75324e688da37"} Jan 20 11:08:18 crc kubenswrapper[4725]: I0120 11:08:18.802554 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9d51d3df-3326-410b-b913-a269f46bb674","Type":"ContainerStarted","Data":"bbb9f892391ca5a176419486af0aa396ba22c982eecb19372fb1e366d08efcd1"} Jan 20 11:08:18 crc kubenswrapper[4725]: I0120 11:08:18.808453 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3bad494d-da48-47e2-bcba-3908cecfbb5a","Type":"ContainerStarted","Data":"64c2f0c49873a789ba7136c0ebf69a0326342714a2ec4617a64b11082bb0b9da"} Jan 20 11:08:18 crc kubenswrapper[4725]: I0120 11:08:18.809689 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get 
\"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:18 crc kubenswrapper[4725]: I0120 11:08:18.809727 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:18 crc kubenswrapper[4725]: I0120 11:08:18.836798 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=19.836779333 podStartE2EDuration="19.836779333s" podCreationTimestamp="2026-01-20 11:07:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:08:18.836135943 +0000 UTC m=+227.044457926" watchObservedRunningTime="2026-01-20 11:08:18.836779333 +0000 UTC m=+227.045101306" Jan 20 11:08:18 crc kubenswrapper[4725]: I0120 11:08:18.858628 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=15.858609629 podStartE2EDuration="15.858609629s" podCreationTimestamp="2026-01-20 11:08:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:08:18.85575977 +0000 UTC m=+227.064081743" watchObservedRunningTime="2026-01-20 11:08:18.858609629 +0000 UTC m=+227.066931602" Jan 20 11:08:19 crc kubenswrapper[4725]: I0120 11:08:19.859951 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vs4qk" event={"ID":"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9","Type":"ContainerStarted","Data":"31b12e5532ee13e8b75aff820013764c6b144d32beb1dc6c9a164e160d1c5405"} Jan 20 11:08:19 crc kubenswrapper[4725]: I0120 
11:08:19.862381 4725 generic.go:334] "Generic (PLEG): container finished" podID="3bad494d-da48-47e2-bcba-3908cecfbb5a" containerID="64c2f0c49873a789ba7136c0ebf69a0326342714a2ec4617a64b11082bb0b9da" exitCode=0 Jan 20 11:08:19 crc kubenswrapper[4725]: I0120 11:08:19.862878 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3bad494d-da48-47e2-bcba-3908cecfbb5a","Type":"ContainerDied","Data":"64c2f0c49873a789ba7136c0ebf69a0326342714a2ec4617a64b11082bb0b9da"} Jan 20 11:08:19 crc kubenswrapper[4725]: I0120 11:08:19.863914 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:19 crc kubenswrapper[4725]: I0120 11:08:19.864062 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:19 crc kubenswrapper[4725]: I0120 11:08:19.886945 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vs4qk" podStartSLOduration=6.055612665 podStartE2EDuration="1m9.886922817s" podCreationTimestamp="2026-01-20 11:07:10 +0000 UTC" firstStartedPulling="2026-01-20 11:07:15.477412268 +0000 UTC m=+163.685734231" lastFinishedPulling="2026-01-20 11:08:19.3087224 +0000 UTC m=+227.517044383" observedRunningTime="2026-01-20 11:08:19.884801341 +0000 UTC m=+228.093123354" watchObservedRunningTime="2026-01-20 11:08:19.886922817 +0000 UTC m=+228.095244790" Jan 20 11:08:20 crc kubenswrapper[4725]: I0120 11:08:20.917666 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-vs4qk" Jan 20 11:08:20 crc kubenswrapper[4725]: I0120 11:08:20.918110 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vs4qk" Jan 20 11:08:20 crc kubenswrapper[4725]: I0120 11:08:20.925213 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6n4zh" event={"ID":"7ebdb343-11c1-4e64-9538-98ca4298b821","Type":"ContainerStarted","Data":"a353b91011ed8a0053f12f280e559ea334c135ab3db3548126610c4f6e3cdf19"} Jan 20 11:08:20 crc kubenswrapper[4725]: I0120 11:08:20.943739 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6n4zh" podStartSLOduration=7.89163911 podStartE2EDuration="1m10.943709888s" podCreationTimestamp="2026-01-20 11:07:10 +0000 UTC" firstStartedPulling="2026-01-20 11:07:16.833477785 +0000 UTC m=+165.041799758" lastFinishedPulling="2026-01-20 11:08:19.885548563 +0000 UTC m=+228.093870536" observedRunningTime="2026-01-20 11:08:20.942084197 +0000 UTC m=+229.150406180" watchObservedRunningTime="2026-01-20 11:08:20.943709888 +0000 UTC m=+229.152031861" Jan 20 11:08:21 crc kubenswrapper[4725]: I0120 11:08:21.383330 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 11:08:21 crc kubenswrapper[4725]: I0120 11:08:21.561265 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3bad494d-da48-47e2-bcba-3908cecfbb5a-kubelet-dir\") pod \"3bad494d-da48-47e2-bcba-3908cecfbb5a\" (UID: \"3bad494d-da48-47e2-bcba-3908cecfbb5a\") " Jan 20 11:08:21 crc kubenswrapper[4725]: I0120 11:08:21.561433 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bad494d-da48-47e2-bcba-3908cecfbb5a-kube-api-access\") pod \"3bad494d-da48-47e2-bcba-3908cecfbb5a\" (UID: \"3bad494d-da48-47e2-bcba-3908cecfbb5a\") " Jan 20 11:08:21 crc kubenswrapper[4725]: I0120 11:08:21.561477 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bad494d-da48-47e2-bcba-3908cecfbb5a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3bad494d-da48-47e2-bcba-3908cecfbb5a" (UID: "3bad494d-da48-47e2-bcba-3908cecfbb5a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:08:21 crc kubenswrapper[4725]: I0120 11:08:21.561874 4725 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3bad494d-da48-47e2-bcba-3908cecfbb5a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 20 11:08:21 crc kubenswrapper[4725]: I0120 11:08:21.700385 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bad494d-da48-47e2-bcba-3908cecfbb5a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3bad494d-da48-47e2-bcba-3908cecfbb5a" (UID: "3bad494d-da48-47e2-bcba-3908cecfbb5a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:08:21 crc kubenswrapper[4725]: I0120 11:08:21.701504 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bad494d-da48-47e2-bcba-3908cecfbb5a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 11:08:22 crc kubenswrapper[4725]: I0120 11:08:22.061789 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3bad494d-da48-47e2-bcba-3908cecfbb5a","Type":"ContainerDied","Data":"f3aec21c53a64aee3c2463f463b5a0fee8ad405f9757e5a135714fa18e74494f"} Jan 20 11:08:22 crc kubenswrapper[4725]: I0120 11:08:22.062230 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3aec21c53a64aee3c2463f463b5a0fee8ad405f9757e5a135714fa18e74494f" Jan 20 11:08:22 crc kubenswrapper[4725]: I0120 11:08:22.062340 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 11:08:23 crc kubenswrapper[4725]: I0120 11:08:23.275786 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-vs4qk" podUID="98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" containerName="registry-server" probeResult="failure" output=< Jan 20 11:08:23 crc kubenswrapper[4725]: timeout: failed to connect service ":50051" within 1s Jan 20 11:08:23 crc kubenswrapper[4725]: > Jan 20 11:08:23 crc kubenswrapper[4725]: I0120 11:08:23.418578 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:23 crc kubenswrapper[4725]: I0120 11:08:23.418659 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" 
podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:23 crc kubenswrapper[4725]: I0120 11:08:23.418684 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:23 crc kubenswrapper[4725]: I0120 11:08:23.418746 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:24 crc kubenswrapper[4725]: I0120 11:08:24.072345 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78bg4" event={"ID":"4f648359-ab53-49a7-8f1a-77281c2bd53c","Type":"ContainerStarted","Data":"9ddffe79c18c43e0511499904feb4ce11970963d5b621ba51bd27f1e5c8b5059"} Jan 20 11:08:30 crc kubenswrapper[4725]: I0120 11:08:30.746315 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6n4zh" Jan 20 11:08:30 crc kubenswrapper[4725]: I0120 11:08:30.746920 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6n4zh" Jan 20 11:08:30 crc kubenswrapper[4725]: I0120 11:08:30.885637 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6n4zh" Jan 20 11:08:30 crc kubenswrapper[4725]: I0120 11:08:30.904717 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vs4qk" Jan 20 11:08:30 crc 
kubenswrapper[4725]: I0120 11:08:30.947488 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vs4qk" Jan 20 11:08:31 crc kubenswrapper[4725]: I0120 11:08:31.274905 4725 generic.go:334] "Generic (PLEG): container finished" podID="4f648359-ab53-49a7-8f1a-77281c2bd53c" containerID="9ddffe79c18c43e0511499904feb4ce11970963d5b621ba51bd27f1e5c8b5059" exitCode=0 Jan 20 11:08:31 crc kubenswrapper[4725]: I0120 11:08:31.275770 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78bg4" event={"ID":"4f648359-ab53-49a7-8f1a-77281c2bd53c","Type":"ContainerDied","Data":"9ddffe79c18c43e0511499904feb4ce11970963d5b621ba51bd27f1e5c8b5059"} Jan 20 11:08:31 crc kubenswrapper[4725]: I0120 11:08:31.892537 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6n4zh" Jan 20 11:08:31 crc kubenswrapper[4725]: I0120 11:08:31.967686 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lhx4z"] Jan 20 11:08:33 crc kubenswrapper[4725]: I0120 11:08:33.852528 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:33 crc kubenswrapper[4725]: I0120 11:08:33.853871 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:33 crc kubenswrapper[4725]: I0120 11:08:33.857008 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server 
namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:33 crc kubenswrapper[4725]: I0120 11:08:33.857061 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:33 crc kubenswrapper[4725]: I0120 11:08:33.906253 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vs4qk"] Jan 20 11:08:33 crc kubenswrapper[4725]: I0120 11:08:33.906487 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vs4qk" podUID="98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" containerName="registry-server" containerID="cri-o://31b12e5532ee13e8b75aff820013764c6b144d32beb1dc6c9a164e160d1c5405" gracePeriod=2 Jan 20 11:08:34 crc kubenswrapper[4725]: I0120 11:08:34.906220 4725 generic.go:334] "Generic (PLEG): container finished" podID="98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" containerID="31b12e5532ee13e8b75aff820013764c6b144d32beb1dc6c9a164e160d1c5405" exitCode=0 Jan 20 11:08:34 crc kubenswrapper[4725]: I0120 11:08:34.906302 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vs4qk" event={"ID":"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9","Type":"ContainerDied","Data":"31b12e5532ee13e8b75aff820013764c6b144d32beb1dc6c9a164e160d1c5405"} Jan 20 11:08:37 crc kubenswrapper[4725]: I0120 11:08:37.829288 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vs4qk" Jan 20 11:08:37 crc kubenswrapper[4725]: I0120 11:08:37.923423 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mk8lh\" (UniqueName: \"kubernetes.io/projected/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-kube-api-access-mk8lh\") pod \"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9\" (UID: \"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9\") " Jan 20 11:08:37 crc kubenswrapper[4725]: I0120 11:08:37.929362 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-kube-api-access-mk8lh" (OuterVolumeSpecName: "kube-api-access-mk8lh") pod "98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" (UID: "98dafc65-0a7c-41fd-abc5-8e8fba03ffa9"). InnerVolumeSpecName "kube-api-access-mk8lh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:08:37 crc kubenswrapper[4725]: I0120 11:08:37.935158 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vs4qk" Jan 20 11:08:37 crc kubenswrapper[4725]: I0120 11:08:37.935529 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vs4qk" event={"ID":"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9","Type":"ContainerDied","Data":"42297e2c5e4314f8ac19bdb872ed1cfccfa8006702130dd94931f10251920fbc"} Jan 20 11:08:37 crc kubenswrapper[4725]: I0120 11:08:37.935718 4725 scope.go:117] "RemoveContainer" containerID="31b12e5532ee13e8b75aff820013764c6b144d32beb1dc6c9a164e160d1c5405" Jan 20 11:08:38 crc kubenswrapper[4725]: I0120 11:08:38.025239 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-utilities\") pod \"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9\" (UID: \"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9\") " Jan 20 11:08:38 crc kubenswrapper[4725]: I0120 11:08:38.025612 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-catalog-content\") pod \"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9\" (UID: \"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9\") " Jan 20 11:08:38 crc kubenswrapper[4725]: I0120 11:08:38.025852 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mk8lh\" (UniqueName: \"kubernetes.io/projected/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-kube-api-access-mk8lh\") on node \"crc\" DevicePath \"\"" Jan 20 11:08:38 crc kubenswrapper[4725]: I0120 11:08:38.026964 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-utilities" (OuterVolumeSpecName: "utilities") pod "98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" (UID: "98dafc65-0a7c-41fd-abc5-8e8fba03ffa9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:08:38 crc kubenswrapper[4725]: I0120 11:08:38.096788 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" (UID: "98dafc65-0a7c-41fd-abc5-8e8fba03ffa9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:08:38 crc kubenswrapper[4725]: I0120 11:08:38.126755 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:08:38 crc kubenswrapper[4725]: I0120 11:08:38.126836 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:08:38 crc kubenswrapper[4725]: I0120 11:08:38.265847 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vs4qk"] Jan 20 11:08:38 crc kubenswrapper[4725]: I0120 11:08:38.272199 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vs4qk"] Jan 20 11:08:38 crc kubenswrapper[4725]: I0120 11:08:38.940189 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" path="/var/lib/kubelet/pods/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9/volumes" Jan 20 11:08:39 crc kubenswrapper[4725]: I0120 11:08:39.960715 4725 scope.go:117] "RemoveContainer" containerID="058803a271e18294b6a526aecf968520aa7cedead52dfdc4165a6133e9e375f6" Jan 20 11:08:41 crc kubenswrapper[4725]: I0120 11:08:41.806842 4725 scope.go:117] "RemoveContainer" containerID="892418dd3e77ceab40f34a8a0fd5716151217dc2c55480d979119a50b49216a9" Jan 20 11:08:43 
crc kubenswrapper[4725]: I0120 11:08:43.418697 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:43 crc kubenswrapper[4725]: I0120 11:08:43.418693 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:43 crc kubenswrapper[4725]: I0120 11:08:43.418788 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:43 crc kubenswrapper[4725]: I0120 11:08:43.418823 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:43 crc kubenswrapper[4725]: I0120 11:08:43.418851 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-2hmdd" Jan 20 11:08:43 crc kubenswrapper[4725]: I0120 11:08:43.419455 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"224d9eef3e311d8f59659266cbc05e855c7275a59ee55b347942977918828c29"} pod="openshift-console/downloads-7954f5f757-2hmdd" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 20 11:08:43 crc 
kubenswrapper[4725]: I0120 11:08:43.419469 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:43 crc kubenswrapper[4725]: I0120 11:08:43.419498 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" containerID="cri-o://224d9eef3e311d8f59659266cbc05e855c7275a59ee55b347942977918828c29" gracePeriod=2 Jan 20 11:08:43 crc kubenswrapper[4725]: I0120 11:08:43.419516 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:43 crc kubenswrapper[4725]: I0120 11:08:43.975324 4725 generic.go:334] "Generic (PLEG): container finished" podID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerID="224d9eef3e311d8f59659266cbc05e855c7275a59ee55b347942977918828c29" exitCode=0 Jan 20 11:08:43 crc kubenswrapper[4725]: I0120 11:08:43.975369 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2hmdd" event={"ID":"6c5d8a1b-5c54-4877-8739-a83ab530197d","Type":"ContainerDied","Data":"224d9eef3e311d8f59659266cbc05e855c7275a59ee55b347942977918828c29"} Jan 20 11:08:47 crc kubenswrapper[4725]: I0120 11:08:47.315626 4725 scope.go:117] "RemoveContainer" containerID="c40d839a198aef3fd3dea37307bd509fe523f84265c35bd129742ca5ff0a0f56" Jan 20 11:08:49 crc kubenswrapper[4725]: I0120 11:08:49.011608 4725 generic.go:334] "Generic (PLEG): container finished" podID="247dcae1-930b-476d-abbe-f33c3da0730b" 
containerID="6af623c61d80b856a4684e07d31119222fc79288bbdf0ea9ec5edf1fb293fc6b" exitCode=0 Jan 20 11:08:49 crc kubenswrapper[4725]: I0120 11:08:49.011696 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbr29" event={"ID":"247dcae1-930b-476d-abbe-f33c3da0730b","Type":"ContainerDied","Data":"6af623c61d80b856a4684e07d31119222fc79288bbdf0ea9ec5edf1fb293fc6b"} Jan 20 11:08:49 crc kubenswrapper[4725]: I0120 11:08:49.014142 4725 generic.go:334] "Generic (PLEG): container finished" podID="39d02691-2128-45e8-841b-5bbf79e0a116" containerID="5d88e1156fdd2131fb13a542776647afc695e341abc2d0bb759d85d523d36656" exitCode=0 Jan 20 11:08:49 crc kubenswrapper[4725]: I0120 11:08:49.014216 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxmdj" event={"ID":"39d02691-2128-45e8-841b-5bbf79e0a116","Type":"ContainerDied","Data":"5d88e1156fdd2131fb13a542776647afc695e341abc2d0bb759d85d523d36656"} Jan 20 11:08:49 crc kubenswrapper[4725]: I0120 11:08:49.017624 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2hmdd" event={"ID":"6c5d8a1b-5c54-4877-8739-a83ab530197d","Type":"ContainerStarted","Data":"1081f83e5b2bc14f68fc29ac53c72e97033bcc38b173413314e21a99e6b6dbfc"} Jan 20 11:08:49 crc kubenswrapper[4725]: I0120 11:08:49.018534 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-2hmdd" Jan 20 11:08:49 crc kubenswrapper[4725]: I0120 11:08:49.018630 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:49 crc kubenswrapper[4725]: I0120 11:08:49.018664 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" 
podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:49 crc kubenswrapper[4725]: I0120 11:08:49.020960 4725 generic.go:334] "Generic (PLEG): container finished" podID="10de7f77-2b14-4c56-b4db-ebb93422b89c" containerID="3aebd70372873b9fbd7b4e02c72fa5025a0936f55bfdb8b39fafb1a0022fe117" exitCode=0 Jan 20 11:08:49 crc kubenswrapper[4725]: I0120 11:08:49.021035 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2jtp" event={"ID":"10de7f77-2b14-4c56-b4db-ebb93422b89c","Type":"ContainerDied","Data":"3aebd70372873b9fbd7b4e02c72fa5025a0936f55bfdb8b39fafb1a0022fe117"} Jan 20 11:08:49 crc kubenswrapper[4725]: I0120 11:08:49.023465 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78bg4" event={"ID":"4f648359-ab53-49a7-8f1a-77281c2bd53c","Type":"ContainerStarted","Data":"4ea88973e6bc5bfee3d9e0f84e8590574c6a997771e7bc383f740e105c2a7784"} Jan 20 11:08:49 crc kubenswrapper[4725]: I0120 11:08:49.026316 4725 generic.go:334] "Generic (PLEG): container finished" podID="1ba77d4b-0178-4730-8869-389efdf58851" containerID="95b3efd0e36287cff3884a1d24955133183f96b36b4ed22b901a472384a7ccb9" exitCode=0 Jan 20 11:08:49 crc kubenswrapper[4725]: I0120 11:08:49.026350 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8pplm" event={"ID":"1ba77d4b-0178-4730-8869-389efdf58851","Type":"ContainerDied","Data":"95b3efd0e36287cff3884a1d24955133183f96b36b4ed22b901a472384a7ccb9"} Jan 20 11:08:49 crc kubenswrapper[4725]: I0120 11:08:49.028601 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nxjc" event={"ID":"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6","Type":"ContainerStarted","Data":"6fc29a792c0b2bdbde59b088ffd262a4ab5cd2ba7cd161055d4ccd07f8587ee9"} Jan 20 
11:08:49 crc kubenswrapper[4725]: I0120 11:08:49.103151 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-78bg4" podStartSLOduration=7.128327411 podStartE2EDuration="1m36.103124124s" podCreationTimestamp="2026-01-20 11:07:13 +0000 UTC" firstStartedPulling="2026-01-20 11:07:18.33313803 +0000 UTC m=+166.541460003" lastFinishedPulling="2026-01-20 11:08:47.307934743 +0000 UTC m=+255.516256716" observedRunningTime="2026-01-20 11:08:49.100645356 +0000 UTC m=+257.308967339" watchObservedRunningTime="2026-01-20 11:08:49.103124124 +0000 UTC m=+257.311446097" Jan 20 11:08:50 crc kubenswrapper[4725]: I0120 11:08:50.084595 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:50 crc kubenswrapper[4725]: I0120 11:08:50.084659 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:51 crc kubenswrapper[4725]: I0120 11:08:51.478107 4725 generic.go:334] "Generic (PLEG): container finished" podID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" containerID="6fc29a792c0b2bdbde59b088ffd262a4ab5cd2ba7cd161055d4ccd07f8587ee9" exitCode=0 Jan 20 11:08:51 crc kubenswrapper[4725]: I0120 11:08:51.478481 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nxjc" event={"ID":"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6","Type":"ContainerDied","Data":"6fc29a792c0b2bdbde59b088ffd262a4ab5cd2ba7cd161055d4ccd07f8587ee9"} Jan 20 11:08:51 crc kubenswrapper[4725]: I0120 11:08:51.482292 4725 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-vbr29" event={"ID":"247dcae1-930b-476d-abbe-f33c3da0730b","Type":"ContainerStarted","Data":"3fbdaac7a794f19e8c69f5422e6fb87dc9cdc634a039e858451ece6c43810bf3"} Jan 20 11:08:51 crc kubenswrapper[4725]: I0120 11:08:51.485561 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxmdj" event={"ID":"39d02691-2128-45e8-841b-5bbf79e0a116","Type":"ContainerStarted","Data":"d2ab977fda92518444c1bc65215c64d5d47407bae017aeec5068d401607e4b48"} Jan 20 11:08:51 crc kubenswrapper[4725]: I0120 11:08:51.487952 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2jtp" event={"ID":"10de7f77-2b14-4c56-b4db-ebb93422b89c","Type":"ContainerStarted","Data":"a8afec3445fd422a4b7356f38493af3a76b9139996d9b5d98e4135127e6f59ee"} Jan 20 11:08:51 crc kubenswrapper[4725]: I0120 11:08:51.492328 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8pplm" event={"ID":"1ba77d4b-0178-4730-8869-389efdf58851","Type":"ContainerStarted","Data":"fe1472cf74a505268aea085b9463dd11873df45aba71fba5af135317ac4193c1"} Jan 20 11:08:51 crc kubenswrapper[4725]: I0120 11:08:51.492957 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:51 crc kubenswrapper[4725]: I0120 11:08:51.493099 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:51 crc kubenswrapper[4725]: I0120 11:08:51.618695 4725 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c2jtp" podStartSLOduration=7.802667533 podStartE2EDuration="1m39.618673486s" podCreationTimestamp="2026-01-20 11:07:12 +0000 UTC" firstStartedPulling="2026-01-20 11:07:18.283402152 +0000 UTC m=+166.491724125" lastFinishedPulling="2026-01-20 11:08:50.099408115 +0000 UTC m=+258.307730078" observedRunningTime="2026-01-20 11:08:51.614199886 +0000 UTC m=+259.822521849" watchObservedRunningTime="2026-01-20 11:08:51.618673486 +0000 UTC m=+259.826995459" Jan 20 11:08:51 crc kubenswrapper[4725]: I0120 11:08:51.675636 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lxmdj" podStartSLOduration=6.872450086 podStartE2EDuration="1m39.675617756s" podCreationTimestamp="2026-01-20 11:07:12 +0000 UTC" firstStartedPulling="2026-01-20 11:07:17.01723682 +0000 UTC m=+165.225558793" lastFinishedPulling="2026-01-20 11:08:49.82040449 +0000 UTC m=+258.028726463" observedRunningTime="2026-01-20 11:08:51.672130316 +0000 UTC m=+259.880452309" watchObservedRunningTime="2026-01-20 11:08:51.675617756 +0000 UTC m=+259.883939729" Jan 20 11:08:51 crc kubenswrapper[4725]: I0120 11:08:51.717898 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vbr29" podStartSLOduration=7.505726522 podStartE2EDuration="1m41.717875203s" podCreationTimestamp="2026-01-20 11:07:10 +0000 UTC" firstStartedPulling="2026-01-20 11:07:15.650444834 +0000 UTC m=+163.858766807" lastFinishedPulling="2026-01-20 11:08:49.862593515 +0000 UTC m=+258.070915488" observedRunningTime="2026-01-20 11:08:51.692569388 +0000 UTC m=+259.900891371" watchObservedRunningTime="2026-01-20 11:08:51.717875203 +0000 UTC m=+259.926197176" Jan 20 11:08:51 crc kubenswrapper[4725]: I0120 11:08:51.718156 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8pplm" 
podStartSLOduration=9.837646032 podStartE2EDuration="1m42.718149332s" podCreationTimestamp="2026-01-20 11:07:09 +0000 UTC" firstStartedPulling="2026-01-20 11:07:16.931767114 +0000 UTC m=+165.140089077" lastFinishedPulling="2026-01-20 11:08:49.812270404 +0000 UTC m=+258.020592377" observedRunningTime="2026-01-20 11:08:51.715274781 +0000 UTC m=+259.923596754" watchObservedRunningTime="2026-01-20 11:08:51.718149332 +0000 UTC m=+259.926471315" Jan 20 11:08:52 crc kubenswrapper[4725]: I0120 11:08:52.992021 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c2jtp" Jan 20 11:08:52 crc kubenswrapper[4725]: I0120 11:08:52.992771 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c2jtp" Jan 20 11:08:53 crc kubenswrapper[4725]: I0120 11:08:53.484207 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:53 crc kubenswrapper[4725]: I0120 11:08:53.484527 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:53 crc kubenswrapper[4725]: I0120 11:08:53.484238 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:53 crc kubenswrapper[4725]: I0120 11:08:53.484675 4725 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:53 crc kubenswrapper[4725]: I0120 11:08:53.743831 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lxmdj" Jan 20 11:08:53 crc kubenswrapper[4725]: I0120 11:08:53.743994 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lxmdj" Jan 20 11:08:54 crc kubenswrapper[4725]: I0120 11:08:54.102203 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-c2jtp" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" containerName="registry-server" probeResult="failure" output=< Jan 20 11:08:54 crc kubenswrapper[4725]: timeout: failed to connect service ":50051" within 1s Jan 20 11:08:54 crc kubenswrapper[4725]: > Jan 20 11:08:54 crc kubenswrapper[4725]: I0120 11:08:54.631672 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nxjc" event={"ID":"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6","Type":"ContainerStarted","Data":"7b79e336b8a9444cc9ab59a4ed8131e0eaf84cfc7a492f4b7e369343d81e1806"} Jan 20 11:08:54 crc kubenswrapper[4725]: I0120 11:08:54.653340 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-78bg4" Jan 20 11:08:54 crc kubenswrapper[4725]: I0120 11:08:54.653397 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-78bg4" Jan 20 11:08:54 crc kubenswrapper[4725]: I0120 11:08:54.872813 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6nxjc" podStartSLOduration=7.796789785 podStartE2EDuration="1m41.872798066s" 
podCreationTimestamp="2026-01-20 11:07:13 +0000 UTC" firstStartedPulling="2026-01-20 11:07:18.42894251 +0000 UTC m=+166.637264483" lastFinishedPulling="2026-01-20 11:08:52.504950791 +0000 UTC m=+260.713272764" observedRunningTime="2026-01-20 11:08:54.87070837 +0000 UTC m=+263.079030343" watchObservedRunningTime="2026-01-20 11:08:54.872798066 +0000 UTC m=+263.081120039" Jan 20 11:08:55 crc kubenswrapper[4725]: I0120 11:08:55.072982 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-lxmdj" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" containerName="registry-server" probeResult="failure" output=< Jan 20 11:08:55 crc kubenswrapper[4725]: timeout: failed to connect service ":50051" within 1s Jan 20 11:08:55 crc kubenswrapper[4725]: > Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.059606 4725 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 20 11:08:56 crc kubenswrapper[4725]: E0120 11:08:56.059936 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" containerName="extract-utilities" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.059952 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" containerName="extract-utilities" Jan 20 11:08:56 crc kubenswrapper[4725]: E0120 11:08:56.059969 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" containerName="extract-content" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.059976 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" containerName="extract-content" Jan 20 11:08:56 crc kubenswrapper[4725]: E0120 11:08:56.059988 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bad494d-da48-47e2-bcba-3908cecfbb5a" containerName="pruner" Jan 20 11:08:56 crc 
kubenswrapper[4725]: I0120 11:08:56.059996 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bad494d-da48-47e2-bcba-3908cecfbb5a" containerName="pruner" Jan 20 11:08:56 crc kubenswrapper[4725]: E0120 11:08:56.060016 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" containerName="registry-server" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.060023 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" containerName="registry-server" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.060196 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bad494d-da48-47e2-bcba-3908cecfbb5a" containerName="pruner" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.060213 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" containerName="registry-server" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.060780 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.062347 4725 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.062619 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd" gracePeriod=15 Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.062840 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7" gracePeriod=15 Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.062899 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de" gracePeriod=15 Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.062936 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89" gracePeriod=15 Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.062984 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578" gracePeriod=15 Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.064256 4725 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 20 11:08:56 crc kubenswrapper[4725]: E0120 11:08:56.064442 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.064457 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 20 11:08:56 crc kubenswrapper[4725]: E0120 11:08:56.064498 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.064507 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 20 11:08:56 crc kubenswrapper[4725]: E0120 11:08:56.064525 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.064533 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 11:08:56 crc kubenswrapper[4725]: E0120 11:08:56.064544 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.064552 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 20 11:08:56 crc kubenswrapper[4725]: E0120 
11:08:56.064560 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.064568 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 20 11:08:56 crc kubenswrapper[4725]: E0120 11:08:56.064580 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.064587 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 11:08:56 crc kubenswrapper[4725]: E0120 11:08:56.064599 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.064608 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.064749 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.064763 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.064773 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.064782 4725 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.064791 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.064800 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.116018 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-78bg4" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" containerName="registry-server" probeResult="failure" output=< Jan 20 11:08:56 crc kubenswrapper[4725]: timeout: failed to connect service ":50051" within 1s Jan 20 11:08:56 crc kubenswrapper[4725]: > Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.224637 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.225001 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.225065 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.225098 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.225126 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.225145 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.225162 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.225181 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.327567 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.328113 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.328298 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.328437 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.328548 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") 
pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.328677 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.328900 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.329131 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.330512 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.371521 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.371631 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.371657 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.371678 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.371704 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.371733 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.371762 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:08:57 crc kubenswrapper[4725]: I0120 11:08:57.043017 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" containerName="oauth-openshift" containerID="cri-o://6d2fdd10ad23f57b144fbbf33de8f10cee0a91d14a076d0c7c7fb512c2d47b34" gracePeriod=15 Jan 20 11:08:57 crc kubenswrapper[4725]: I0120 11:08:57.657916 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 20 11:08:57 crc kubenswrapper[4725]: I0120 11:08:57.659563 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 20 11:08:57 crc kubenswrapper[4725]: I0120 11:08:57.660342 4725 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578" exitCode=2 Jan 20 11:08:58 crc kubenswrapper[4725]: I0120 11:08:58.672244 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 20 11:08:58 crc kubenswrapper[4725]: I0120 11:08:58.675928 4725 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 20 11:08:58 crc kubenswrapper[4725]: I0120 11:08:58.678223 4725 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7" exitCode=0 Jan 20 11:08:58 crc kubenswrapper[4725]: I0120 11:08:58.678269 4725 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de" exitCode=0 Jan 20 11:08:58 crc kubenswrapper[4725]: I0120 11:08:58.678285 4725 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89" exitCode=0 Jan 20 11:08:58 crc kubenswrapper[4725]: I0120 11:08:58.678369 4725 scope.go:117] "RemoveContainer" containerID="809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.686035 4725 generic.go:334] "Generic (PLEG): container finished" podID="9a6106c0-75fa-4285-bc23-06ced58cf133" containerID="6d2fdd10ad23f57b144fbbf33de8f10cee0a91d14a076d0c7c7fb512c2d47b34" exitCode=0 Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.686149 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" event={"ID":"9a6106c0-75fa-4285-bc23-06ced58cf133","Type":"ContainerDied","Data":"6d2fdd10ad23f57b144fbbf33de8f10cee0a91d14a076d0c7c7fb512c2d47b34"} Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.688272 4725 generic.go:334] "Generic (PLEG): container finished" podID="9d51d3df-3326-410b-b913-a269f46bb674" containerID="bbb9f892391ca5a176419486af0aa396ba22c982eecb19372fb1e366d08efcd1" exitCode=0 Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.688354 4725 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9d51d3df-3326-410b-b913-a269f46bb674","Type":"ContainerDied","Data":"bbb9f892391ca5a176419486af0aa396ba22c982eecb19372fb1e366d08efcd1"} Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.689019 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.694224 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.695404 4725 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd" exitCode=0 Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.934563 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.939964 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.940564 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.940748 4725 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.942846 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.943243 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.943470 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.943662 4725 status_manager.go:851] "Failed to get status for pod" 
podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.975575 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.975652 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.975703 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a6106c0-75fa-4285-bc23-06ced58cf133-audit-dir\") pod \"9a6106c0-75fa-4285-bc23-06ced58cf133\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.975736 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-router-certs\") pod \"9a6106c0-75fa-4285-bc23-06ced58cf133\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.975769 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-login\") pod \"9a6106c0-75fa-4285-bc23-06ced58cf133\" 
(UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.975792 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-audit-policies\") pod \"9a6106c0-75fa-4285-bc23-06ced58cf133\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.975810 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-service-ca\") pod \"9a6106c0-75fa-4285-bc23-06ced58cf133\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.975835 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-idp-0-file-data\") pod \"9a6106c0-75fa-4285-bc23-06ced58cf133\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.975853 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-provider-selection\") pod \"9a6106c0-75fa-4285-bc23-06ced58cf133\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.975884 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d2rw\" (UniqueName: \"kubernetes.io/projected/9a6106c0-75fa-4285-bc23-06ced58cf133-kube-api-access-8d2rw\") pod \"9a6106c0-75fa-4285-bc23-06ced58cf133\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " Jan 20 11:08:59 crc kubenswrapper[4725]: 
I0120 11:08:59.975909 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-ocp-branding-template\") pod \"9a6106c0-75fa-4285-bc23-06ced58cf133\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.975935 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-cliconfig\") pod \"9a6106c0-75fa-4285-bc23-06ced58cf133\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.975968 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-serving-cert\") pod \"9a6106c0-75fa-4285-bc23-06ced58cf133\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.975995 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.976012 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-trusted-ca-bundle\") pod \"9a6106c0-75fa-4285-bc23-06ced58cf133\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.976032 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-error\") pod \"9a6106c0-75fa-4285-bc23-06ced58cf133\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.976051 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-session\") pod \"9a6106c0-75fa-4285-bc23-06ced58cf133\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.977279 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.978294 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "9a6106c0-75fa-4285-bc23-06ced58cf133" (UID: "9a6106c0-75fa-4285-bc23-06ced58cf133"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.978312 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "9a6106c0-75fa-4285-bc23-06ced58cf133" (UID: "9a6106c0-75fa-4285-bc23-06ced58cf133"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.978333 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "9a6106c0-75fa-4285-bc23-06ced58cf133" (UID: "9a6106c0-75fa-4285-bc23-06ced58cf133"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.978419 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.978444 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.978468 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a6106c0-75fa-4285-bc23-06ced58cf133-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "9a6106c0-75fa-4285-bc23-06ced58cf133" (UID: "9a6106c0-75fa-4285-bc23-06ced58cf133"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.978703 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "9a6106c0-75fa-4285-bc23-06ced58cf133" (UID: "9a6106c0-75fa-4285-bc23-06ced58cf133"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.984719 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "9a6106c0-75fa-4285-bc23-06ced58cf133" (UID: "9a6106c0-75fa-4285-bc23-06ced58cf133"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.985171 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "9a6106c0-75fa-4285-bc23-06ced58cf133" (UID: "9a6106c0-75fa-4285-bc23-06ced58cf133"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.985675 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a6106c0-75fa-4285-bc23-06ced58cf133-kube-api-access-8d2rw" (OuterVolumeSpecName: "kube-api-access-8d2rw") pod "9a6106c0-75fa-4285-bc23-06ced58cf133" (UID: "9a6106c0-75fa-4285-bc23-06ced58cf133"). InnerVolumeSpecName "kube-api-access-8d2rw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.986647 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "9a6106c0-75fa-4285-bc23-06ced58cf133" (UID: "9a6106c0-75fa-4285-bc23-06ced58cf133"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.987014 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "9a6106c0-75fa-4285-bc23-06ced58cf133" (UID: "9a6106c0-75fa-4285-bc23-06ced58cf133"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.987009 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "9a6106c0-75fa-4285-bc23-06ced58cf133" (UID: "9a6106c0-75fa-4285-bc23-06ced58cf133"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.987542 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "9a6106c0-75fa-4285-bc23-06ced58cf133" (UID: "9a6106c0-75fa-4285-bc23-06ced58cf133"). 
InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.018024 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "9a6106c0-75fa-4285-bc23-06ced58cf133" (UID: "9a6106c0-75fa-4285-bc23-06ced58cf133"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.018120 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "9a6106c0-75fa-4285-bc23-06ced58cf133" (UID: "9a6106c0-75fa-4285-bc23-06ced58cf133"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077729 4725 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077777 4725 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a6106c0-75fa-4285-bc23-06ced58cf133-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077794 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077809 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077824 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077836 4725 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077852 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077869 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077885 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d2rw\" (UniqueName: \"kubernetes.io/projected/9a6106c0-75fa-4285-bc23-06ced58cf133-kube-api-access-8d2rw\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077897 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077909 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077921 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077934 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 
11:09:00.077945 4725 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077957 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077970 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077984 4725 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.703097 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" event={"ID":"9a6106c0-75fa-4285-bc23-06ced58cf133","Type":"ContainerDied","Data":"f39928c8d7256975b95a8abe066b49247f38d754512e9fe57502d4feea0d8501"} Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.703154 4725 scope.go:117] "RemoveContainer" containerID="6d2fdd10ad23f57b144fbbf33de8f10cee0a91d14a076d0c7c7fb512c2d47b34" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.703260 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.704742 4725 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.705299 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.705512 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.707066 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.709014 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.730981 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8pplm" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.731049 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8pplm" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.732649 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vbr29" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.734221 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vbr29" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.739026 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.739942 4725 scope.go:117] "RemoveContainer" containerID="3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.740095 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.740714 4725 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.741278 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.741789 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.748322 4725 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.764878 4725 scope.go:117] "RemoveContainer" containerID="01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.786445 4725 scope.go:117] "RemoveContainer" containerID="660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.794140 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vbr29" Jan 20 11:09:00 crc kubenswrapper[4725]: 
I0120 11:09:00.796354 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.796812 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.801370 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.801888 4725 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.805991 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8pplm"
Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.811438 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.815296 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.815359 4725 scope.go:117] "RemoveContainer" containerID="9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578"
Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.815766 4725 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.816112 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.816345 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.836107 4725 scope.go:117] "RemoveContainer" containerID="e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd"
Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.851284 4725 scope.go:117] "RemoveContainer" containerID="b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527"
Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.938963 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.977308 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.978007 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.978465 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.978814 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.979129 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.089129 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d51d3df-3326-410b-b913-a269f46bb674-kube-api-access\") pod \"9d51d3df-3326-410b-b913-a269f46bb674\" (UID: \"9d51d3df-3326-410b-b913-a269f46bb674\") "
Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.089199 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9d51d3df-3326-410b-b913-a269f46bb674-kubelet-dir\") pod \"9d51d3df-3326-410b-b913-a269f46bb674\" (UID: \"9d51d3df-3326-410b-b913-a269f46bb674\") "
Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.089247 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9d51d3df-3326-410b-b913-a269f46bb674-var-lock\") pod \"9d51d3df-3326-410b-b913-a269f46bb674\" (UID: \"9d51d3df-3326-410b-b913-a269f46bb674\") "
Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.089305 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d51d3df-3326-410b-b913-a269f46bb674-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9d51d3df-3326-410b-b913-a269f46bb674" (UID: "9d51d3df-3326-410b-b913-a269f46bb674"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.089380 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d51d3df-3326-410b-b913-a269f46bb674-var-lock" (OuterVolumeSpecName: "var-lock") pod "9d51d3df-3326-410b-b913-a269f46bb674" (UID: "9d51d3df-3326-410b-b913-a269f46bb674"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.089691 4725 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9d51d3df-3326-410b-b913-a269f46bb674-var-lock\") on node \"crc\" DevicePath \"\""
Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.089715 4725 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9d51d3df-3326-410b-b913-a269f46bb674-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.094714 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d51d3df-3326-410b-b913-a269f46bb674-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9d51d3df-3326-410b-b913-a269f46bb674" (UID: "9d51d3df-3326-410b-b913-a269f46bb674"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.190760 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d51d3df-3326-410b-b913-a269f46bb674-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 20 11:09:01 crc kubenswrapper[4725]: E0120 11:09:01.205731 4725 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.206403 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 20 11:09:01 crc kubenswrapper[4725]: W0120 11:09:01.229377 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-fd7cff061a986922e1b4f28eb5e269b198a893eb66424f0de574e27fc343138e WatchSource:0}: Error finding container fd7cff061a986922e1b4f28eb5e269b198a893eb66424f0de574e27fc343138e: Status 404 returned error can't find the container with id fd7cff061a986922e1b4f28eb5e269b198a893eb66424f0de574e27fc343138e
Jan 20 11:09:01 crc kubenswrapper[4725]: E0120 11:09:01.232369 4725 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.194:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188c6bdad2b37894 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 11:09:01.231782036 +0000 UTC m=+269.440104009,LastTimestamp:2026-01-20 11:09:01.231782036 +0000 UTC m=+269.440104009,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.720096 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9d51d3df-3326-410b-b913-a269f46bb674","Type":"ContainerDied","Data":"85530cce234d8a705121a8934ff7069e86642c36409985a7688a7884b5e723ae"}
Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.720454 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85530cce234d8a705121a8934ff7069e86642c36409985a7688a7884b5e723ae"
Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.720135 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.721989 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"fd7cff061a986922e1b4f28eb5e269b198a893eb66424f0de574e27fc343138e"}
Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.737970 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.741377 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.741821 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.742229 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.767033 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vbr29"
Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.767783 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.768143 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.768320 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.768475 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.771295 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8pplm"
Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.771742 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.772003 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.772285 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.772664 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.087594 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.088072 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.091444 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.091946 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.107224 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6d710284dc38193c8da82c80d14e0b0bfa14fc6378f2a4e5802649c4a1e5052b"}
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.136048 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c2jtp"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.136860 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.137311 4725 status_manager.go:851] "Failed to get status for pod" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" pod="openshift-marketplace/redhat-marketplace-c2jtp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2jtp\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.137683 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.137978 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.138251 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.171563 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c2jtp"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.172128 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.172591 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.173148 4725 status_manager.go:851] "Failed to get status for pod" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" pod="openshift-marketplace/redhat-marketplace-c2jtp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2jtp\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.173421 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.173706 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.430210 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-2hmdd"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.430801 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.431246 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.431518 4725 status_manager.go:851] "Failed to get status for pod" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" pod="openshift-console/downloads-7954f5f757-2hmdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-2hmdd\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.431839 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.432060 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.432304 4725 status_manager.go:851] "Failed to get status for pod" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" pod="openshift-marketplace/redhat-marketplace-c2jtp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2jtp\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.784694 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lxmdj"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.785608 4725 status_manager.go:851] "Failed to get status for pod" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" pod="openshift-console/downloads-7954f5f757-2hmdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-2hmdd\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.786036 4725 status_manager.go:851] "Failed to get status for pod" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" pod="openshift-marketplace/redhat-marketplace-lxmdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lxmdj\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.786380 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.786682 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.786977 4725 status_manager.go:851] "Failed to get status for pod" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" pod="openshift-marketplace/redhat-marketplace-c2jtp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2jtp\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.787275 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.787567 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.832003 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lxmdj"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.832808 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.833411 4725 status_manager.go:851] "Failed to get status for pod" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" pod="openshift-console/downloads-7954f5f757-2hmdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-2hmdd\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.833766 4725 status_manager.go:851] "Failed to get status for pod" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" pod="openshift-marketplace/redhat-marketplace-lxmdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lxmdj\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.834062 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.834432 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.834686 4725 status_manager.go:851] "Failed to get status for pod" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" pod="openshift-marketplace/redhat-marketplace-c2jtp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2jtp\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.834950 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: E0120 11:09:04.114220 4725 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.114631 4725 status_manager.go:851] "Failed to get status for pod" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" pod="openshift-marketplace/redhat-marketplace-c2jtp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2jtp\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.115048 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.115578 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.115797 4725 status_manager.go:851] "Failed to get status for pod" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" pod="openshift-console/downloads-7954f5f757-2hmdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-2hmdd\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.115989 4725 status_manager.go:851] "Failed to get status for pod" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" pod="openshift-marketplace/redhat-marketplace-lxmdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lxmdj\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.116199 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.116386 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: E0120 11:09:04.433987 4725 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: E0120 11:09:04.434932 4725 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: E0120 11:09:04.436194 4725 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: E0120 11:09:04.436765 4725 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: E0120 11:09:04.437740 4725 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.437811 4725 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 20 11:09:04 crc kubenswrapper[4725]: E0120 11:09:04.438411 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="200ms"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.511438 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6nxjc"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.511494 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6nxjc"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.551597 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6nxjc"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.552152 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.552580 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.552855 4725 status_manager.go:851] "Failed to get status for pod" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" pod="openshift-console/downloads-7954f5f757-2hmdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-2hmdd\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.553204 4725 status_manager.go:851] "Failed to get status for pod" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" pod="openshift-marketplace/redhat-marketplace-lxmdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lxmdj\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.553487 4725 
status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.553747 4725 status_manager.go:851] "Failed to get status for pod" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" pod="openshift-marketplace/redhat-operators-6nxjc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6nxjc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.554109 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.554585 4725 status_manager.go:851] "Failed to get status for pod" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" pod="openshift-marketplace/redhat-marketplace-c2jtp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2jtp\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: E0120 11:09:04.639524 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="400ms" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.698869 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-78bg4" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.699580 4725 status_manager.go:851] "Failed to get status for pod" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" pod="openshift-marketplace/redhat-operators-6nxjc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6nxjc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.699924 4725 status_manager.go:851] "Failed to get status for pod" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" pod="openshift-marketplace/redhat-operators-78bg4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78bg4\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.700281 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.700672 4725 status_manager.go:851] "Failed to get status for pod" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" pod="openshift-marketplace/redhat-marketplace-c2jtp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2jtp\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.701191 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: 
connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.701541 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.701848 4725 status_manager.go:851] "Failed to get status for pod" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" pod="openshift-console/downloads-7954f5f757-2hmdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-2hmdd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.702207 4725 status_manager.go:851] "Failed to get status for pod" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" pod="openshift-marketplace/redhat-marketplace-lxmdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lxmdj\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.702459 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.735117 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-78bg4" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.735841 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.736360 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.736820 4725 status_manager.go:851] "Failed to get status for pod" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" pod="openshift-console/downloads-7954f5f757-2hmdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-2hmdd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.737141 4725 status_manager.go:851] "Failed to get status for pod" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" pod="openshift-marketplace/redhat-marketplace-lxmdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lxmdj\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.737410 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.737623 4725 status_manager.go:851] "Failed to get status for pod" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" 
pod="openshift-marketplace/redhat-operators-6nxjc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6nxjc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.737884 4725 status_manager.go:851] "Failed to get status for pod" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" pod="openshift-marketplace/redhat-operators-78bg4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78bg4\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.738193 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.738435 4725 status_manager.go:851] "Failed to get status for pod" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" pod="openshift-marketplace/redhat-marketplace-c2jtp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2jtp\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:05 crc kubenswrapper[4725]: E0120 11:09:05.040508 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="800ms" Jan 20 11:09:05 crc kubenswrapper[4725]: I0120 11:09:05.164051 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6nxjc" Jan 20 11:09:05 crc kubenswrapper[4725]: I0120 11:09:05.164702 4725 
status_manager.go:851] "Failed to get status for pod" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" pod="openshift-marketplace/redhat-marketplace-c2jtp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2jtp\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:05 crc kubenswrapper[4725]: I0120 11:09:05.165396 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:05 crc kubenswrapper[4725]: I0120 11:09:05.165711 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:05 crc kubenswrapper[4725]: I0120 11:09:05.166011 4725 status_manager.go:851] "Failed to get status for pod" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" pod="openshift-console/downloads-7954f5f757-2hmdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-2hmdd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:05 crc kubenswrapper[4725]: I0120 11:09:05.166331 4725 status_manager.go:851] "Failed to get status for pod" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" pod="openshift-marketplace/redhat-marketplace-lxmdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lxmdj\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:05 crc kubenswrapper[4725]: I0120 11:09:05.166667 4725 
status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:05 crc kubenswrapper[4725]: I0120 11:09:05.167331 4725 status_manager.go:851] "Failed to get status for pod" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" pod="openshift-marketplace/redhat-operators-6nxjc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6nxjc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:05 crc kubenswrapper[4725]: I0120 11:09:05.167925 4725 status_manager.go:851] "Failed to get status for pod" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" pod="openshift-marketplace/redhat-operators-78bg4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78bg4\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:05 crc kubenswrapper[4725]: I0120 11:09:05.168302 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:05 crc kubenswrapper[4725]: E0120 11:09:05.841534 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="1.6s" Jan 20 11:09:06 crc kubenswrapper[4725]: I0120 11:09:06.932135 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:09:06 crc kubenswrapper[4725]: I0120 11:09:06.932824 4725 status_manager.go:851] "Failed to get status for pod" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" pod="openshift-marketplace/redhat-operators-6nxjc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6nxjc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:06 crc kubenswrapper[4725]: I0120 11:09:06.933142 4725 status_manager.go:851] "Failed to get status for pod" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" pod="openshift-marketplace/redhat-operators-78bg4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78bg4\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:06 crc kubenswrapper[4725]: I0120 11:09:06.933437 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:06 crc kubenswrapper[4725]: I0120 11:09:06.933855 4725 status_manager.go:851] "Failed to get status for pod" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" pod="openshift-marketplace/redhat-marketplace-c2jtp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2jtp\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:06 crc kubenswrapper[4725]: I0120 11:09:06.934231 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.102.83.194:6443: connect: connection refused" Jan 20 11:09:06 crc kubenswrapper[4725]: I0120 11:09:06.934908 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:06 crc kubenswrapper[4725]: I0120 11:09:06.935207 4725 status_manager.go:851] "Failed to get status for pod" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" pod="openshift-console/downloads-7954f5f757-2hmdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-2hmdd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:06 crc kubenswrapper[4725]: I0120 11:09:06.935529 4725 status_manager.go:851] "Failed to get status for pod" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" pod="openshift-marketplace/redhat-marketplace-lxmdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lxmdj\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:06 crc kubenswrapper[4725]: I0120 11:09:06.935770 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:06 crc kubenswrapper[4725]: I0120 11:09:06.949837 4725 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bb0f33a8-410b-4912-82e7-7ef77344fd80" Jan 20 11:09:06 crc kubenswrapper[4725]: I0120 11:09:06.949879 4725 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bb0f33a8-410b-4912-82e7-7ef77344fd80" Jan 20 11:09:06 crc kubenswrapper[4725]: E0120 11:09:06.950361 4725 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:09:06 crc kubenswrapper[4725]: I0120 11:09:06.950875 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:09:06 crc kubenswrapper[4725]: W0120 11:09:06.972567 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-dba1315d3256f5e83955584b627532772239c8907a2e0464ac710d96d5ea985c WatchSource:0}: Error finding container dba1315d3256f5e83955584b627532772239c8907a2e0464ac710d96d5ea985c: Status 404 returned error can't find the container with id dba1315d3256f5e83955584b627532772239c8907a2e0464ac710d96d5ea985c Jan 20 11:09:07 crc kubenswrapper[4725]: I0120 11:09:07.194929 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"dba1315d3256f5e83955584b627532772239c8907a2e0464ac710d96d5ea985c"} Jan 20 11:09:07 crc kubenswrapper[4725]: E0120 11:09:07.442340 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="3.2s" Jan 20 11:09:08 crc kubenswrapper[4725]: E0120 11:09:08.353747 4725 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.194:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188c6bdad2b37894 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 11:09:01.231782036 +0000 UTC m=+269.440104009,LastTimestamp:2026-01-20 11:09:01.231782036 +0000 UTC m=+269.440104009,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 11:09:10 crc kubenswrapper[4725]: I0120 11:09:10.219971 4725 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="5e0ee4a8520f2950257bde6114c647cf2018446a23f9ee85a6195ee80f1f56b5" exitCode=0 Jan 20 11:09:10 crc kubenswrapper[4725]: I0120 11:09:10.220111 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"5e0ee4a8520f2950257bde6114c647cf2018446a23f9ee85a6195ee80f1f56b5"} Jan 20 11:09:10 crc kubenswrapper[4725]: I0120 11:09:10.220331 4725 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bb0f33a8-410b-4912-82e7-7ef77344fd80" Jan 20 11:09:10 crc kubenswrapper[4725]: I0120 11:09:10.220485 4725 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bb0f33a8-410b-4912-82e7-7ef77344fd80" Jan 20 11:09:10 crc kubenswrapper[4725]: I0120 11:09:10.221120 4725 status_manager.go:851] "Failed to get status for pod" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" pod="openshift-console/downloads-7954f5f757-2hmdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-2hmdd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:10 crc kubenswrapper[4725]: E0120 11:09:10.221146 4725 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:09:10 crc kubenswrapper[4725]: I0120 11:09:10.221628 4725 status_manager.go:851] "Failed to get status for pod" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" pod="openshift-marketplace/redhat-marketplace-lxmdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lxmdj\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:10 crc kubenswrapper[4725]: I0120 11:09:10.221848 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:10 crc kubenswrapper[4725]: I0120 11:09:10.222033 4725 status_manager.go:851] "Failed to get status for pod" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" pod="openshift-marketplace/redhat-operators-6nxjc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6nxjc\": dial tcp 38.102.83.194:6443: connect: 
connection refused" Jan 20 11:09:10 crc kubenswrapper[4725]: I0120 11:09:10.222237 4725 status_manager.go:851] "Failed to get status for pod" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" pod="openshift-marketplace/redhat-operators-78bg4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78bg4\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:10 crc kubenswrapper[4725]: I0120 11:09:10.222490 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:10 crc kubenswrapper[4725]: I0120 11:09:10.222755 4725 status_manager.go:851] "Failed to get status for pod" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" pod="openshift-marketplace/redhat-marketplace-c2jtp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2jtp\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:10 crc kubenswrapper[4725]: I0120 11:09:10.222987 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:10 crc kubenswrapper[4725]: I0120 11:09:10.223268 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection 
refused" Jan 20 11:09:10 crc kubenswrapper[4725]: E0120 11:09:10.643127 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="6.4s" Jan 20 11:09:11 crc kubenswrapper[4725]: I0120 11:09:11.233144 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 20 11:09:11 crc kubenswrapper[4725]: I0120 11:09:11.233493 4725 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b" exitCode=1 Jan 20 11:09:11 crc kubenswrapper[4725]: I0120 11:09:11.233525 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b"} Jan 20 11:09:11 crc kubenswrapper[4725]: I0120 11:09:11.234495 4725 scope.go:117] "RemoveContainer" containerID="bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b" Jan 20 11:09:11 crc kubenswrapper[4725]: I0120 11:09:11.241644 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:11 crc kubenswrapper[4725]: I0120 11:09:11.242044 4725 status_manager.go:851] "Failed to get status for pod" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" pod="openshift-console/downloads-7954f5f757-2hmdd" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-2hmdd\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:11 crc kubenswrapper[4725]: I0120 11:09:11.242492 4725 status_manager.go:851] "Failed to get status for pod" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" pod="openshift-marketplace/redhat-marketplace-lxmdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lxmdj\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:11 crc kubenswrapper[4725]: I0120 11:09:11.242698 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:11 crc kubenswrapper[4725]: I0120 11:09:11.242897 4725 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:11 crc kubenswrapper[4725]: I0120 11:09:11.243109 4725 status_manager.go:851] "Failed to get status for pod" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" pod="openshift-marketplace/redhat-operators-6nxjc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6nxjc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:11 crc kubenswrapper[4725]: I0120 11:09:11.243297 4725 status_manager.go:851] "Failed to get status for pod" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" pod="openshift-marketplace/redhat-operators-78bg4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78bg4\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:11 crc kubenswrapper[4725]: I0120 11:09:11.243475 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:11 crc kubenswrapper[4725]: I0120 11:09:11.243659 4725 status_manager.go:851] "Failed to get status for pod" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" pod="openshift-marketplace/redhat-marketplace-c2jtp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2jtp\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:11 crc kubenswrapper[4725]: I0120 11:09:11.243838 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:12 crc kubenswrapper[4725]: I0120 11:09:12.942983 4725 status_manager.go:851] "Failed to get status for pod" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" pod="openshift-console/downloads-7954f5f757-2hmdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-2hmdd\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:12 crc kubenswrapper[4725]: I0120 11:09:12.943730 4725 status_manager.go:851] "Failed to get status for pod" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" pod="openshift-marketplace/redhat-marketplace-lxmdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lxmdj\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:12 crc kubenswrapper[4725]: I0120 11:09:12.944017 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:12 crc kubenswrapper[4725]: I0120 11:09:12.944436 4725 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:12 crc kubenswrapper[4725]: I0120 11:09:12.944789 4725 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:12 crc kubenswrapper[4725]: I0120 11:09:12.945202 4725 status_manager.go:851] "Failed to get status for pod" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" pod="openshift-marketplace/redhat-operators-6nxjc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6nxjc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:12 crc kubenswrapper[4725]: I0120 11:09:12.945946 4725 status_manager.go:851] "Failed to get status for pod" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" pod="openshift-marketplace/redhat-operators-78bg4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78bg4\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:12 crc kubenswrapper[4725]: I0120 11:09:12.946245 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:12 crc kubenswrapper[4725]: I0120 11:09:12.946471 4725 status_manager.go:851] "Failed to get status for pod" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" pod="openshift-marketplace/redhat-marketplace-c2jtp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2jtp\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:12 crc kubenswrapper[4725]: I0120 11:09:12.947388 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:12 crc kubenswrapper[4725]: I0120 11:09:12.947834 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:13 crc kubenswrapper[4725]: I0120 11:09:13.268461 4725 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 20 11:09:13 crc kubenswrapper[4725]: I0120 11:09:13.268635 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e3bbacf2d7780b8573824d001346405f0420c7d380342fbe71ab39458535d5e1"}
Jan 20 11:09:13 crc kubenswrapper[4725]: I0120 11:09:13.272678 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"63712aa375616fbb699d9ac705043ae2bc23a9d78e9375d0563fd696b1c43981"}
Jan 20 11:09:16 crc kubenswrapper[4725]: I0120 11:09:14.290834 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6cfb902102eae93f880d5ef7a90008815ea13a18c8ff67faea8ac54f1d76ad94"}
Jan 20 11:09:16 crc kubenswrapper[4725]: I0120 11:09:14.925420 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 11:09:17 crc kubenswrapper[4725]: I0120 11:09:17.660411 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5b770a508b53ff718313002ad309dae0bc6d52414cdf6eb7477d3fe7aafffb1f"}
Jan 20 11:09:18 crc kubenswrapper[4725]: I0120 11:09:18.679741 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3e3a584572c50ff3363670c823b96eafff257bc8507772487be9d9e56f398344"}
Jan 20 11:09:18 crc kubenswrapper[4725]: I0120 11:09:18.681004 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"83a80a7126789b925a69fd547e0f7c325040d0f767b1efb3ba0ceec4cc88a515"}
Jan 20 11:09:19 crc kubenswrapper[4725]: I0120 11:09:19.686623 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 11:09:19 crc kubenswrapper[4725]: I0120 11:09:19.686689 4725 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bb0f33a8-410b-4912-82e7-7ef77344fd80"
Jan 20 11:09:19 crc kubenswrapper[4725]: I0120 11:09:19.687856 4725 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bb0f33a8-410b-4912-82e7-7ef77344fd80"
Jan 20 11:09:19 crc kubenswrapper[4725]: I0120 11:09:19.696380 4725 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 11:09:19 crc kubenswrapper[4725]: I0120 11:09:19.794053 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 11:09:19 crc kubenswrapper[4725]: I0120 11:09:19.794398 4725 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 20 11:09:19 crc kubenswrapper[4725]: I0120 11:09:19.794482 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 20 11:09:20 crc kubenswrapper[4725]: I0120 11:09:20.692414 4725 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bb0f33a8-410b-4912-82e7-7ef77344fd80"
Jan 20 11:09:20 crc kubenswrapper[4725]: I0120 11:09:20.693209 4725 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bb0f33a8-410b-4912-82e7-7ef77344fd80"
Jan 20 11:09:21 crc kubenswrapper[4725]: I0120 11:09:21.053901 4725 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="27c0694a-974e-4403-b573-13de25d37a48"
Jan 20 11:09:29 crc kubenswrapper[4725]: I0120 11:09:29.794599 4725 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 20 11:09:29 crc kubenswrapper[4725]: I0120 11:09:29.796475 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 20 11:09:32 crc kubenswrapper[4725]: I0120 11:09:32.197759 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 20 11:09:32 crc kubenswrapper[4725]: I0120 11:09:32.243350 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 20 11:09:32 crc kubenswrapper[4725]: I0120 11:09:32.366653 4725 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 20 11:09:32 crc kubenswrapper[4725]: I0120 11:09:32.530210 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 20 11:09:32 crc kubenswrapper[4725]: I0120 11:09:32.769512 4725 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 20 11:09:33 crc kubenswrapper[4725]: I0120 11:09:33.044290 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 20 11:09:33 crc kubenswrapper[4725]: I0120 11:09:33.221988 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 20 11:09:33 crc kubenswrapper[4725]: I0120 11:09:33.540921 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 20 11:09:33 crc kubenswrapper[4725]: I0120 11:09:33.946809 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 20 11:09:34 crc kubenswrapper[4725]: I0120 11:09:34.103808 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 20 11:09:34 crc kubenswrapper[4725]: I0120 11:09:34.103833 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 20 11:09:34 crc kubenswrapper[4725]: I0120 11:09:34.105205 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 20 11:09:34 crc kubenswrapper[4725]: I0120 11:09:34.154069 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 20 11:09:34 crc kubenswrapper[4725]: I0120 11:09:34.156016 4725 reflector.go:368] Caches populated for
*v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 20 11:09:34 crc kubenswrapper[4725]: I0120 11:09:34.278789 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 20 11:09:34 crc kubenswrapper[4725]: I0120 11:09:34.307962 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 20 11:09:34 crc kubenswrapper[4725]: I0120 11:09:34.314373 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 20 11:09:34 crc kubenswrapper[4725]: I0120 11:09:34.514278 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 20 11:09:34 crc kubenswrapper[4725]: I0120 11:09:34.554696 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 20 11:09:34 crc kubenswrapper[4725]: I0120 11:09:34.707422 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 20 11:09:34 crc kubenswrapper[4725]: I0120 11:09:34.779099 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 20 11:09:34 crc kubenswrapper[4725]: I0120 11:09:34.789378 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 20 11:09:34 crc kubenswrapper[4725]: I0120 11:09:34.871477 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 20 11:09:34 crc kubenswrapper[4725]: I0120 11:09:34.892639 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 20 11:09:35 crc kubenswrapper[4725]: I0120 11:09:35.128074 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 20 11:09:35 crc kubenswrapper[4725]: I0120 11:09:35.169897 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 20 11:09:35 crc kubenswrapper[4725]: I0120 11:09:35.264262 4725 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 20 11:09:35 crc kubenswrapper[4725]: I0120 11:09:35.280410 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 20 11:09:35 crc kubenswrapper[4725]: I0120 11:09:35.380178 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 20 11:09:35 crc kubenswrapper[4725]: I0120 11:09:35.482701 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 20 11:09:35 crc kubenswrapper[4725]: I0120 11:09:35.484802 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 20 11:09:35 crc kubenswrapper[4725]: I0120 11:09:35.526135 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 20 11:09:35 crc kubenswrapper[4725]: I0120 11:09:35.531548 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 20 11:09:35 crc kubenswrapper[4725]: I0120 11:09:35.599453 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 20 11:09:35 crc kubenswrapper[4725]: I0120 11:09:35.633737 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 20 11:09:35 crc kubenswrapper[4725]: I0120 11:09:35.856540 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 20 11:09:35 crc kubenswrapper[4725]: I0120 11:09:35.884204 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 20 11:09:35 crc kubenswrapper[4725]: I0120 11:09:35.919847 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 20 11:09:35 crc kubenswrapper[4725]: I0120 11:09:35.921124 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.115328 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.122907 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.145036 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.209242 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.232251 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.234930 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.281253 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.310316 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.311830 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.348662 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.404626 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.531904 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.533326 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.675202 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.722737 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.814533 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.821828 4725 reflector.go:368] Caches populated for *v1.Secret from
object-"openshift-authentication-operator"/"serving-cert"
Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.893453 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.933796 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 20 11:09:37 crc kubenswrapper[4725]: I0120 11:09:37.036758 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 20 11:09:37 crc kubenswrapper[4725]: I0120 11:09:37.129709 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 20 11:09:37 crc kubenswrapper[4725]: I0120 11:09:37.137222 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 20 11:09:37 crc kubenswrapper[4725]: I0120 11:09:37.177405 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 20 11:09:37 crc kubenswrapper[4725]: I0120 11:09:37.219270 4725 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 20 11:09:37 crc kubenswrapper[4725]: I0120 11:09:37.627955 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 20 11:09:37 crc kubenswrapper[4725]: I0120 11:09:37.829617 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 20 11:09:37 crc kubenswrapper[4725]: I0120 11:09:37.904451 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.048922 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.151959 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.195148 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.195380 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.195630 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.423572 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.508640 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.509349 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.509676 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.608720 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.651437 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.655248 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.714707 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.748806 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.755141 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.778749 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.807516 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.840013 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.856716 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.872696 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.883223 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.957114 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.961829 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.061593 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.150063 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.198701 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.257924 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.287269 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.291450 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.322359 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.342534 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.503964 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 20 11:09:39 crc
kubenswrapper[4725]: I0120 11:09:39.599344 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.621385 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.743647 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.766395 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.780346 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.795130 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.795717 4725 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.807479 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.797431 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.807981 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.809366 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"e3bbacf2d7780b8573824d001346405f0420c7d380342fbe71ab39458535d5e1"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.809588 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://e3bbacf2d7780b8573824d001346405f0420c7d380342fbe71ab39458535d5e1" gracePeriod=30
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.817723 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.857729 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.968814 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 20 11:09:40 crc kubenswrapper[4725]: I0120 11:09:40.004299 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 20 11:09:40 crc kubenswrapper[4725]: I0120 11:09:40.120252 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 20 11:09:40 crc kubenswrapper[4725]: I0120 11:09:40.173382 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 20 11:09:40 crc kubenswrapper[4725]: I0120 11:09:40.220768 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 20 11:09:40 crc kubenswrapper[4725]: I0120 11:09:40.306118 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 20 11:09:40 crc kubenswrapper[4725]: I0120 11:09:40.371984 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 20 11:09:40 crc kubenswrapper[4725]: I0120 11:09:40.391545 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 20 11:09:40 crc kubenswrapper[4725]: I0120 11:09:40.503714 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 20 11:09:40 crc kubenswrapper[4725]: I0120 11:09:40.503987 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 20 11:09:40 crc kubenswrapper[4725]: I0120 11:09:40.503906 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 20 11:09:40 crc kubenswrapper[4725]: I0120 11:09:40.602178 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 20 11:09:40 crc kubenswrapper[4725]: I0120 11:09:40.605384 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 20 11:09:40 crc kubenswrapper[4725]: I0120 11:09:40.608387 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 20 11:09:40 crc kubenswrapper[4725]: I0120 11:09:40.612343 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.010490 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.014545 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.014557 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.014736 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.027162 4725 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.032054 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lhx4z","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.032132 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-575cc5b957-cxhjt","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 20 11:09:41 crc kubenswrapper[4725]: E0120 11:09:41.032425 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133"
containerName="oauth-openshift" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.032450 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" containerName="oauth-openshift" Jan 20 11:09:41 crc kubenswrapper[4725]: E0120 11:09:41.032487 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d51d3df-3326-410b-b913-a269f46bb674" containerName="installer" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.032502 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d51d3df-3326-410b-b913-a269f46bb674" containerName="installer" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.032622 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" containerName="oauth-openshift" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.032643 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d51d3df-3326-410b-b913-a269f46bb674" containerName="installer" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.033170 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.035888 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.038147 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.039161 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.042207 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.042729 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.042913 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.043162 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.043194 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.043794 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.043872 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 20 11:09:41 
crc kubenswrapper[4725]: I0120 11:09:41.043874 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.044245 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.044601 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.044774 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.054890 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.057542 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.066273 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.070208 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.070190026 podStartE2EDuration="22.070190026s" podCreationTimestamp="2026-01-20 11:09:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:09:41.067007726 +0000 UTC m=+309.275329709" watchObservedRunningTime="2026-01-20 11:09:41.070190026 +0000 UTC m=+309.278511999" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.096817 4725 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.107863 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.107986 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-user-template-login\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.108052 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.108106 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-service-ca\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc 
kubenswrapper[4725]: I0120 11:09:41.108152 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-session\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.108194 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-serving-cert\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.108222 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.108256 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.108283 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-user-template-error\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.108319 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-cliconfig\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.108347 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc9z5\" (UniqueName: \"kubernetes.io/projected/74629c1f-0986-4d9f-bdd4-3c0672715065-kube-api-access-wc9z5\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.108416 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-router-certs\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.108456 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/74629c1f-0986-4d9f-bdd4-3c0672715065-audit-policies\") pod 
\"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.108522 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74629c1f-0986-4d9f-bdd4-3c0672715065-audit-dir\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.152413 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.209222 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/74629c1f-0986-4d9f-bdd4-3c0672715065-audit-policies\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.209352 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74629c1f-0986-4d9f-bdd4-3c0672715065-audit-dir\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.209471 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74629c1f-0986-4d9f-bdd4-3c0672715065-audit-dir\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc 
kubenswrapper[4725]: I0120 11:09:41.209485 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.209628 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-user-template-login\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.209776 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.209828 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-service-ca\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.209913 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-session\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.210038 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-serving-cert\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.210128 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.210214 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.210297 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-user-template-error\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " 
pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.210370 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-cliconfig\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.210449 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc9z5\" (UniqueName: \"kubernetes.io/projected/74629c1f-0986-4d9f-bdd4-3c0672715065-kube-api-access-wc9z5\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.210548 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-router-certs\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.211649 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-cliconfig\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.211950 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/74629c1f-0986-4d9f-bdd4-3c0672715065-audit-policies\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.212536 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-service-ca\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.215343 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.215707 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.216394 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-serving-cert\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.216477 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.216920 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-router-certs\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.217295 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-user-template-error\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.219183 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-user-template-login\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.219741 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-session\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 
crc kubenswrapper[4725]: I0120 11:09:41.224327 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.227854 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.228150 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc9z5\" (UniqueName: \"kubernetes.io/projected/74629c1f-0986-4d9f-bdd4-3c0672715065-kube-api-access-wc9z5\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.260231 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.292166 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.318876 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.351000 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.383758 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.411134 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.422821 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.480823 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.688404 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.696722 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.768582 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.800308 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-575cc5b957-cxhjt"]
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.847049 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.951294 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.951345 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.957045 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.993056 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.002904 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.022626 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" event={"ID":"74629c1f-0986-4d9f-bdd4-3c0672715065","Type":"ContainerStarted","Data":"03145731ffb8eb9c63ff5569a81f25a7f2b68611beacd61f4ce3f7fc363299cf"}
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.027708 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.082825 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.315953 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.316300 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.317495 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.328288 4725 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.334302 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.404980 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.423896 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.440872 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.441322 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.493254 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.541236 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.569791 4725 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.570076 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://6d710284dc38193c8da82c80d14e0b0bfa14fc6378f2a4e5802649c4a1e5052b" gracePeriod=5
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.723510 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.835963 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.836752 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.846554 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.859119 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.942446 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" path="/var/lib/kubelet/pods/9a6106c0-75fa-4285-bc23-06ced58cf133/volumes"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.956416 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.967528 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.003526 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.028992 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-575cc5b957-cxhjt_74629c1f-0986-4d9f-bdd4-3c0672715065/oauth-openshift/0.log"
Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.029042 4725 generic.go:334] "Generic (PLEG): container finished" podID="74629c1f-0986-4d9f-bdd4-3c0672715065" containerID="71d543bb382de7054da3bd8531a4cccaf979889db9ef36e5eb2c9452a7637aec" exitCode=255
Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.029128 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" event={"ID":"74629c1f-0986-4d9f-bdd4-3c0672715065","Type":"ContainerDied","Data":"71d543bb382de7054da3bd8531a4cccaf979889db9ef36e5eb2c9452a7637aec"}
Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.029928 4725 scope.go:117] "RemoveContainer" containerID="71d543bb382de7054da3bd8531a4cccaf979889db9ef36e5eb2c9452a7637aec"
Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.123469 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.216318 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.264944 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.310630 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.328112 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.550796 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.582449 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.583646 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.678249 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.781699 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.829724 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.868231 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.954227 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.035550 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-575cc5b957-cxhjt_74629c1f-0986-4d9f-bdd4-3c0672715065/oauth-openshift/0.log"
Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.036993 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" event={"ID":"74629c1f-0986-4d9f-bdd4-3c0672715065","Type":"ContainerStarted","Data":"519957c6b156057462815387b3f634d6978553198e161b60042b4c24c13cc669"}
Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.037320 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt"
Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.042783 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt"
Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.046669 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.069744 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" podStartSLOduration=73.069725886 podStartE2EDuration="1m13.069725886s" podCreationTimestamp="2026-01-20 11:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:09:44.066422532 +0000 UTC m=+312.274744525" watchObservedRunningTime="2026-01-20 11:09:44.069725886 +0000 UTC m=+312.278047859"
Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.113132 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.130966 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.236609 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.298759 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.317681 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.434922 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.449202 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.541077 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.575895 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.768831 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.795164 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.825913 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 20 11:09:45 crc kubenswrapper[4725]: I0120 11:09:45.052649 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 20 11:09:45 crc kubenswrapper[4725]: I0120 11:09:45.172258 4725 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 20 11:09:45 crc kubenswrapper[4725]: I0120 11:09:45.214383 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 20 11:09:45 crc kubenswrapper[4725]: I0120 11:09:45.265519 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 20 11:09:45 crc kubenswrapper[4725]: I0120 11:09:45.453532 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 20 11:09:45 crc kubenswrapper[4725]: I0120 11:09:45.503180 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 20 11:09:45 crc kubenswrapper[4725]: I0120 11:09:45.519394 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 20 11:09:45 crc kubenswrapper[4725]: I0120 11:09:45.546806 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 20 11:09:45 crc kubenswrapper[4725]: I0120 11:09:45.607502 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 20 11:09:45 crc kubenswrapper[4725]: I0120 11:09:45.645110 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 20 11:09:45 crc kubenswrapper[4725]: I0120 11:09:45.763817 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 20 11:09:45 crc kubenswrapper[4725]: I0120 11:09:45.770846 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 20 11:09:45 crc kubenswrapper[4725]: I0120 11:09:45.819362 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 20 11:09:45 crc kubenswrapper[4725]: I0120 11:09:45.937453 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 20 11:09:46 crc kubenswrapper[4725]: I0120 11:09:46.025936 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 20 11:09:46 crc kubenswrapper[4725]: I0120 11:09:46.274722 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 20 11:09:46 crc kubenswrapper[4725]: I0120 11:09:46.361799 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 20 11:09:46 crc kubenswrapper[4725]: I0120 11:09:46.413184 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 20 11:09:46 crc kubenswrapper[4725]: I0120 11:09:46.582951 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 20 11:09:46 crc kubenswrapper[4725]: I0120 11:09:46.644047 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 20 11:09:46 crc kubenswrapper[4725]: I0120 11:09:46.777732 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 20 11:09:46 crc kubenswrapper[4725]: I0120 11:09:46.820763 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 20 11:09:46 crc kubenswrapper[4725]: I0120 11:09:46.968163 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 20 11:09:48 crc kubenswrapper[4725]: I0120 11:09:48.983105 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 20 11:09:48 crc kubenswrapper[4725]: I0120 11:09:48.983649 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.063380 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.063469 4725 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="6d710284dc38193c8da82c80d14e0b0bfa14fc6378f2a4e5802649c4a1e5052b" exitCode=137
Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.063545 4725 scope.go:117] "RemoveContainer" containerID="6d710284dc38193c8da82c80d14e0b0bfa14fc6378f2a4e5802649c4a1e5052b"
Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.063797 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.107023 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.107131 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.107176 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.107224 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.107242 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.107512 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.107547 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.107704 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.107714 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.110173 4725 scope.go:117] "RemoveContainer" containerID="6d710284dc38193c8da82c80d14e0b0bfa14fc6378f2a4e5802649c4a1e5052b"
Jan 20 11:09:49 crc kubenswrapper[4725]: E0120 11:09:49.110719 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d710284dc38193c8da82c80d14e0b0bfa14fc6378f2a4e5802649c4a1e5052b\": container with ID starting with 6d710284dc38193c8da82c80d14e0b0bfa14fc6378f2a4e5802649c4a1e5052b not found: ID does not exist" containerID="6d710284dc38193c8da82c80d14e0b0bfa14fc6378f2a4e5802649c4a1e5052b"
Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.110841 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d710284dc38193c8da82c80d14e0b0bfa14fc6378f2a4e5802649c4a1e5052b"} err="failed to get container status \"6d710284dc38193c8da82c80d14e0b0bfa14fc6378f2a4e5802649c4a1e5052b\": rpc error: code = NotFound desc = could not find container \"6d710284dc38193c8da82c80d14e0b0bfa14fc6378f2a4e5802649c4a1e5052b\": container with ID starting with 6d710284dc38193c8da82c80d14e0b0bfa14fc6378f2a4e5802649c4a1e5052b not found: ID does not exist"
Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.116368 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.209013 4725 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.209061 4725 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.209073 4725 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.209101 4725 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.209111 4725 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Jan 20 11:09:50 crc kubenswrapper[4725]: I0120 11:09:50.938413 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 20 11:09:56 crc kubenswrapper[4725]: I0120 11:09:56.704653 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 20 11:09:57 crc kubenswrapper[4725]: I0120 11:09:57.125198 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 20 11:09:57 crc kubenswrapper[4725]: I0120 11:09:57.197329 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 20 11:09:59 crc kubenswrapper[4725]: I0120 11:09:59.774530 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 20 11:10:00 crc kubenswrapper[4725]: I0120 11:10:00.271970 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 20 11:10:00 crc kubenswrapper[4725]: I0120 11:10:00.371991 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 20 11:10:01 crc kubenswrapper[4725]: I0120 11:10:01.562947 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 20 11:10:02 crc kubenswrapper[4725]: I0120 11:10:02.599482 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 20 11:10:02 crc kubenswrapper[4725]: I0120 11:10:02.685830 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 20 11:10:03 crc kubenswrapper[4725]: I0120 11:10:03.672187 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 20 11:10:04 crc kubenswrapper[4725]: I0120 11:10:04.324585 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 20 11:10:05 crc kubenswrapper[4725]: I0120 11:10:05.737998 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 20 11:10:07 crc kubenswrapper[4725]: I0120 11:10:07.828728 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 20 11:10:08 crc kubenswrapper[4725]: I0120 11:10:08.487034 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 20 11:10:10 crc kubenswrapper[4725]: I0120 11:10:10.074018 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 20 11:10:11 crc kubenswrapper[4725]: I0120 11:10:11.256392 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Jan 20 11:10:11 crc kubenswrapper[4725]: I0120 11:10:11.258581 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 20 11:10:11 crc kubenswrapper[4725]: I0120 11:10:11.258651 4725 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e3bbacf2d7780b8573824d001346405f0420c7d380342fbe71ab39458535d5e1" exitCode=137
Jan 20 11:10:11 crc kubenswrapper[4725]: I0120 11:10:11.258698 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e3bbacf2d7780b8573824d001346405f0420c7d380342fbe71ab39458535d5e1"}
Jan 20 11:10:11 crc kubenswrapper[4725]: I0120 11:10:11.258746 4725 scope.go:117] "RemoveContainer" containerID="bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b"
Jan 20 11:10:12 crc kubenswrapper[4725]: I0120 11:10:12.008911 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 20 11:10:12 crc kubenswrapper[4725]: I0120 11:10:12.267033 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Jan 20 11:10:12 crc kubenswrapper[4725]: I0120 11:10:12.268519 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"419df3a758f9387d9c10937abbec55a1db175c3d47cba10ac5d6f26113c8f2a1"}
Jan 20 11:10:12 crc kubenswrapper[4725]: I0120 11:10:12.626330 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 20 11:10:13 crc kubenswrapper[4725]: I0120 11:10:13.071752 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 20 11:10:14 crc kubenswrapper[4725]: I0120 11:10:14.396958 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 20 11:10:14 crc kubenswrapper[4725]: I0120 11:10:14.925961 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 11:10:18 crc kubenswrapper[4725]: I0120 11:10:18.353355 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 20 11:10:18 crc kubenswrapper[4725]: I0120 11:10:18.616913 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 20 11:10:19 crc kubenswrapper[4725]: I0120 11:10:19.793363 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 11:10:19 crc kubenswrapper[4725]: I0120 11:10:19.799059 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 11:10:21 crc kubenswrapper[4725]: I0120 11:10:21.530623 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 20 11:10:23 crc kubenswrapper[4725]: I0120 11:10:23.069988 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 20 11:10:24 crc kubenswrapper[4725]: I0120 11:10:24.930015 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 11:10:26 crc kubenswrapper[4725]: I0120 11:10:26.728407 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 11:10:26 crc kubenswrapper[4725]: I0120 11:10:26.728478 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 11:10:27 crc kubenswrapper[4725]: I0120 11:10:27.310601 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 20 11:10:28 crc kubenswrapper[4725]: I0120 11:10:28.433761 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 20 11:10:30 crc kubenswrapper[4725]: I0120 11:10:30.315376 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r5qmp"]
Jan 20 11:10:30 crc kubenswrapper[4725]: I0120 11:10:30.315969 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" podUID="eb4612ff-dcf7-4e19-af27-fb8b3b54ce39" containerName="controller-manager" containerID="cri-o://ea963c578918bf4bf99332528fc4e04f7b319045506af3111dcaf31e6c185053" gracePeriod=30
Jan 20 11:10:30 crc kubenswrapper[4725]: I0120 11:10:30.405286 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw"]
Jan 20 11:10:30 crc kubenswrapper[4725]: I0120 11:10:30.405537 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" podUID="600286e6-beb3-40f1-9077-9c8abf34d55a" containerName="route-controller-manager" containerID="cri-o://0ec30deae86a7a8ddfff86819ddc6f4f12f6f2fd12406fe866d400de06f3575a" gracePeriod=30
Jan 20 11:10:30 crc kubenswrapper[4725]: I0120 11:10:30.898698 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp"
Jan 20 11:10:30 crc kubenswrapper[4725]: I0120 11:10:30.904617 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw"
Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.071962 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-client-ca\") pod \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") "
Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.072545 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/600286e6-beb3-40f1-9077-9c8abf34d55a-serving-cert\") pod \"600286e6-beb3-40f1-9077-9c8abf34d55a\" (UID: \"600286e6-beb3-40f1-9077-9c8abf34d55a\") "
Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.072951 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-client-ca" (OuterVolumeSpecName: "client-ca") pod "eb4612ff-dcf7-4e19-af27-fb8b3b54ce39" (UID: "eb4612ff-dcf7-4e19-af27-fb8b3b54ce39"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.073677 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-proxy-ca-bundles\") pod \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") "
Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.073794 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-serving-cert\") pod \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") "
Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.073823 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/600286e6-beb3-40f1-9077-9c8abf34d55a-config\") pod \"600286e6-beb3-40f1-9077-9c8abf34d55a\" (UID: \"600286e6-beb3-40f1-9077-9c8abf34d55a\") "
Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.073849 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-config\") pod \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") "
Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.073878 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kmh7\" (UniqueName: \"kubernetes.io/projected/600286e6-beb3-40f1-9077-9c8abf34d55a-kube-api-access-7kmh7\") pod \"600286e6-beb3-40f1-9077-9c8abf34d55a\" (UID: \"600286e6-beb3-40f1-9077-9c8abf34d55a\") "
Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.074369 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "eb4612ff-dcf7-4e19-af27-fb8b3b54ce39" (UID: "eb4612ff-dcf7-4e19-af27-fb8b3b54ce39"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.073935 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/600286e6-beb3-40f1-9077-9c8abf34d55a-client-ca\") pod \"600286e6-beb3-40f1-9077-9c8abf34d55a\" (UID: \"600286e6-beb3-40f1-9077-9c8abf34d55a\") " Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.074950 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87v9q\" (UniqueName: \"kubernetes.io/projected/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-kube-api-access-87v9q\") pod \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.074892 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/600286e6-beb3-40f1-9077-9c8abf34d55a-config" (OuterVolumeSpecName: "config") pod "600286e6-beb3-40f1-9077-9c8abf34d55a" (UID: "600286e6-beb3-40f1-9077-9c8abf34d55a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.075456 4725 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.075492 4725 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.075513 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/600286e6-beb3-40f1-9077-9c8abf34d55a-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.075978 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/600286e6-beb3-40f1-9077-9c8abf34d55a-client-ca" (OuterVolumeSpecName: "client-ca") pod "600286e6-beb3-40f1-9077-9c8abf34d55a" (UID: "600286e6-beb3-40f1-9077-9c8abf34d55a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.076915 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-config" (OuterVolumeSpecName: "config") pod "eb4612ff-dcf7-4e19-af27-fb8b3b54ce39" (UID: "eb4612ff-dcf7-4e19-af27-fb8b3b54ce39"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.079012 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/600286e6-beb3-40f1-9077-9c8abf34d55a-kube-api-access-7kmh7" (OuterVolumeSpecName: "kube-api-access-7kmh7") pod "600286e6-beb3-40f1-9077-9c8abf34d55a" (UID: "600286e6-beb3-40f1-9077-9c8abf34d55a"). InnerVolumeSpecName "kube-api-access-7kmh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.079125 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/600286e6-beb3-40f1-9077-9c8abf34d55a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "600286e6-beb3-40f1-9077-9c8abf34d55a" (UID: "600286e6-beb3-40f1-9077-9c8abf34d55a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.079337 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-kube-api-access-87v9q" (OuterVolumeSpecName: "kube-api-access-87v9q") pod "eb4612ff-dcf7-4e19-af27-fb8b3b54ce39" (UID: "eb4612ff-dcf7-4e19-af27-fb8b3b54ce39"). InnerVolumeSpecName "kube-api-access-87v9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.079552 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "eb4612ff-dcf7-4e19-af27-fb8b3b54ce39" (UID: "eb4612ff-dcf7-4e19-af27-fb8b3b54ce39"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.176823 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/600286e6-beb3-40f1-9077-9c8abf34d55a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.176867 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.176888 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.176905 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kmh7\" (UniqueName: \"kubernetes.io/projected/600286e6-beb3-40f1-9077-9c8abf34d55a-kube-api-access-7kmh7\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.176923 4725 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/600286e6-beb3-40f1-9077-9c8abf34d55a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.176939 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87v9q\" (UniqueName: \"kubernetes.io/projected/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-kube-api-access-87v9q\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.437303 4725 generic.go:334] "Generic (PLEG): container finished" podID="600286e6-beb3-40f1-9077-9c8abf34d55a" containerID="0ec30deae86a7a8ddfff86819ddc6f4f12f6f2fd12406fe866d400de06f3575a" exitCode=0 Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.437381 4725 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" event={"ID":"600286e6-beb3-40f1-9077-9c8abf34d55a","Type":"ContainerDied","Data":"0ec30deae86a7a8ddfff86819ddc6f4f12f6f2fd12406fe866d400de06f3575a"} Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.437412 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" event={"ID":"600286e6-beb3-40f1-9077-9c8abf34d55a","Type":"ContainerDied","Data":"ade77836dcd269f9c5de0b97ad651f7a735e267f67b9c6aa9acfc5f72e48f82f"} Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.437408 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.437430 4725 scope.go:117] "RemoveContainer" containerID="0ec30deae86a7a8ddfff86819ddc6f4f12f6f2fd12406fe866d400de06f3575a" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.440548 4725 generic.go:334] "Generic (PLEG): container finished" podID="eb4612ff-dcf7-4e19-af27-fb8b3b54ce39" containerID="ea963c578918bf4bf99332528fc4e04f7b319045506af3111dcaf31e6c185053" exitCode=0 Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.440588 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" event={"ID":"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39","Type":"ContainerDied","Data":"ea963c578918bf4bf99332528fc4e04f7b319045506af3111dcaf31e6c185053"} Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.440665 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" event={"ID":"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39","Type":"ContainerDied","Data":"3aabb471cfdad379863cc9d4e63ad21b453b02806d14a79762da7bd36f235094"} Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 
11:10:31.440555 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.463564 4725 scope.go:117] "RemoveContainer" containerID="0ec30deae86a7a8ddfff86819ddc6f4f12f6f2fd12406fe866d400de06f3575a" Jan 20 11:10:31 crc kubenswrapper[4725]: E0120 11:10:31.464550 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ec30deae86a7a8ddfff86819ddc6f4f12f6f2fd12406fe866d400de06f3575a\": container with ID starting with 0ec30deae86a7a8ddfff86819ddc6f4f12f6f2fd12406fe866d400de06f3575a not found: ID does not exist" containerID="0ec30deae86a7a8ddfff86819ddc6f4f12f6f2fd12406fe866d400de06f3575a" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.464611 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ec30deae86a7a8ddfff86819ddc6f4f12f6f2fd12406fe866d400de06f3575a"} err="failed to get container status \"0ec30deae86a7a8ddfff86819ddc6f4f12f6f2fd12406fe866d400de06f3575a\": rpc error: code = NotFound desc = could not find container \"0ec30deae86a7a8ddfff86819ddc6f4f12f6f2fd12406fe866d400de06f3575a\": container with ID starting with 0ec30deae86a7a8ddfff86819ddc6f4f12f6f2fd12406fe866d400de06f3575a not found: ID does not exist" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.464648 4725 scope.go:117] "RemoveContainer" containerID="ea963c578918bf4bf99332528fc4e04f7b319045506af3111dcaf31e6c185053" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.475709 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw"] Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.483436 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw"] Jan 20 
11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.489754 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r5qmp"] Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.494400 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r5qmp"] Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.506627 4725 scope.go:117] "RemoveContainer" containerID="ea963c578918bf4bf99332528fc4e04f7b319045506af3111dcaf31e6c185053" Jan 20 11:10:31 crc kubenswrapper[4725]: E0120 11:10:31.507345 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea963c578918bf4bf99332528fc4e04f7b319045506af3111dcaf31e6c185053\": container with ID starting with ea963c578918bf4bf99332528fc4e04f7b319045506af3111dcaf31e6c185053 not found: ID does not exist" containerID="ea963c578918bf4bf99332528fc4e04f7b319045506af3111dcaf31e6c185053" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.507398 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea963c578918bf4bf99332528fc4e04f7b319045506af3111dcaf31e6c185053"} err="failed to get container status \"ea963c578918bf4bf99332528fc4e04f7b319045506af3111dcaf31e6c185053\": rpc error: code = NotFound desc = could not find container \"ea963c578918bf4bf99332528fc4e04f7b319045506af3111dcaf31e6c185053\": container with ID starting with ea963c578918bf4bf99332528fc4e04f7b319045506af3111dcaf31e6c185053 not found: ID does not exist" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.364846 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-867865d494-fqfz6"] Jan 20 11:10:32 crc kubenswrapper[4725]: E0120 11:10:32.365420 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="600286e6-beb3-40f1-9077-9c8abf34d55a" 
containerName="route-controller-manager" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.365443 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="600286e6-beb3-40f1-9077-9c8abf34d55a" containerName="route-controller-manager" Jan 20 11:10:32 crc kubenswrapper[4725]: E0120 11:10:32.365478 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb4612ff-dcf7-4e19-af27-fb8b3b54ce39" containerName="controller-manager" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.365492 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb4612ff-dcf7-4e19-af27-fb8b3b54ce39" containerName="controller-manager" Jan 20 11:10:32 crc kubenswrapper[4725]: E0120 11:10:32.365502 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.365509 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.365639 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.365655 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="600286e6-beb3-40f1-9077-9c8abf34d55a" containerName="route-controller-manager" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.365669 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb4612ff-dcf7-4e19-af27-fb8b3b54ce39" containerName="controller-manager" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.366362 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.371135 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g"] Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.372564 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.372748 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.373898 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.373925 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.374153 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.376565 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.376695 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.378688 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.379830 4725 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.380455 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.381002 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.381903 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.383420 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.383741 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-867865d494-fqfz6"] Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.393148 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.423775 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g"] Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.499585 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-client-ca\") pod \"route-controller-manager-6f58f7659d-bc64g\" (UID: \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\") " pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.499838 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-client-ca\") pod \"controller-manager-867865d494-fqfz6\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.499869 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvnbk\" (UniqueName: \"kubernetes.io/projected/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-kube-api-access-rvnbk\") pod \"route-controller-manager-6f58f7659d-bc64g\" (UID: \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\") " pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.499903 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpfzv\" (UniqueName: \"kubernetes.io/projected/9639e6c8-b710-4924-83fd-88fddbc3685a-kube-api-access-qpfzv\") pod \"controller-manager-867865d494-fqfz6\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.499946 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-config\") pod \"controller-manager-867865d494-fqfz6\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.500190 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-serving-cert\") pod \"route-controller-manager-6f58f7659d-bc64g\" (UID: \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\") 
" pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.500344 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9639e6c8-b710-4924-83fd-88fddbc3685a-serving-cert\") pod \"controller-manager-867865d494-fqfz6\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.500372 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-proxy-ca-bundles\") pod \"controller-manager-867865d494-fqfz6\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.500429 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-config\") pod \"route-controller-manager-6f58f7659d-bc64g\" (UID: \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\") " pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.601453 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-client-ca\") pod \"controller-manager-867865d494-fqfz6\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.601519 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvnbk\" 
(UniqueName: \"kubernetes.io/projected/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-kube-api-access-rvnbk\") pod \"route-controller-manager-6f58f7659d-bc64g\" (UID: \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\") " pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.601563 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpfzv\" (UniqueName: \"kubernetes.io/projected/9639e6c8-b710-4924-83fd-88fddbc3685a-kube-api-access-qpfzv\") pod \"controller-manager-867865d494-fqfz6\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.601598 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-config\") pod \"controller-manager-867865d494-fqfz6\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.601620 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-serving-cert\") pod \"route-controller-manager-6f58f7659d-bc64g\" (UID: \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\") " pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.601653 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9639e6c8-b710-4924-83fd-88fddbc3685a-serving-cert\") pod \"controller-manager-867865d494-fqfz6\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc 
kubenswrapper[4725]: I0120 11:10:32.601675 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-proxy-ca-bundles\") pod \"controller-manager-867865d494-fqfz6\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.601701 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-config\") pod \"route-controller-manager-6f58f7659d-bc64g\" (UID: \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\") " pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.601753 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-client-ca\") pod \"route-controller-manager-6f58f7659d-bc64g\" (UID: \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\") " pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.603021 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-client-ca\") pod \"route-controller-manager-6f58f7659d-bc64g\" (UID: \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\") " pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.604208 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-client-ca\") pod \"controller-manager-867865d494-fqfz6\" (UID: 
\"9639e6c8-b710-4924-83fd-88fddbc3685a\") " pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.610377 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9639e6c8-b710-4924-83fd-88fddbc3685a-serving-cert\") pod \"controller-manager-867865d494-fqfz6\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.610411 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-serving-cert\") pod \"route-controller-manager-6f58f7659d-bc64g\" (UID: \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\") " pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.610757 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-config\") pod \"route-controller-manager-6f58f7659d-bc64g\" (UID: \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\") " pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.611158 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-config\") pod \"controller-manager-867865d494-fqfz6\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.611669 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-proxy-ca-bundles\") pod \"controller-manager-867865d494-fqfz6\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.626016 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpfzv\" (UniqueName: \"kubernetes.io/projected/9639e6c8-b710-4924-83fd-88fddbc3685a-kube-api-access-qpfzv\") pod \"controller-manager-867865d494-fqfz6\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.627212 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvnbk\" (UniqueName: \"kubernetes.io/projected/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-kube-api-access-rvnbk\") pod \"route-controller-manager-6f58f7659d-bc64g\" (UID: \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\") " pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.712365 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.727804 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.931298 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g"] Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.949530 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="600286e6-beb3-40f1-9077-9c8abf34d55a" path="/var/lib/kubelet/pods/600286e6-beb3-40f1-9077-9c8abf34d55a/volumes" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.951107 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb4612ff-dcf7-4e19-af27-fb8b3b54ce39" path="/var/lib/kubelet/pods/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39/volumes" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.976339 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-867865d494-fqfz6"] Jan 20 11:10:32 crc kubenswrapper[4725]: W0120 11:10:32.983488 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9639e6c8_b710_4924_83fd_88fddbc3685a.slice/crio-634795ef5b82ae3c5466e830f01957e946ae9498de88c251ca057a67e42b3ab2 WatchSource:0}: Error finding container 634795ef5b82ae3c5466e830f01957e946ae9498de88c251ca057a67e42b3ab2: Status 404 returned error can't find the container with id 634795ef5b82ae3c5466e830f01957e946ae9498de88c251ca057a67e42b3ab2 Jan 20 11:10:33 crc kubenswrapper[4725]: I0120 11:10:33.458895 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" event={"ID":"95b59a23-ecd6-4f96-bf93-ffc1efdefc25","Type":"ContainerStarted","Data":"0be220cc1c1cc9eae7269715a59b5bc3c0f3f02f64c531e8d87477041fa6ebd8"} Jan 20 11:10:33 crc kubenswrapper[4725]: I0120 11:10:33.458967 4725 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" event={"ID":"95b59a23-ecd6-4f96-bf93-ffc1efdefc25","Type":"ContainerStarted","Data":"d2641cc78f4a233f2cdc04c78b092c86c19dcb9f83e6fab7f6cf33b38f6cf72a"} Jan 20 11:10:33 crc kubenswrapper[4725]: I0120 11:10:33.458984 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:33 crc kubenswrapper[4725]: I0120 11:10:33.460602 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" event={"ID":"9639e6c8-b710-4924-83fd-88fddbc3685a","Type":"ContainerStarted","Data":"94a9ba9d87f1d523742fa5672918e7ce993d1a54e9de69365d52a4c0c811d5f8"} Jan 20 11:10:33 crc kubenswrapper[4725]: I0120 11:10:33.460649 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" event={"ID":"9639e6c8-b710-4924-83fd-88fddbc3685a","Type":"ContainerStarted","Data":"634795ef5b82ae3c5466e830f01957e946ae9498de88c251ca057a67e42b3ab2"} Jan 20 11:10:33 crc kubenswrapper[4725]: I0120 11:10:33.460796 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:33 crc kubenswrapper[4725]: I0120 11:10:33.497631 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" podStartSLOduration=3.497610682 podStartE2EDuration="3.497610682s" podCreationTimestamp="2026-01-20 11:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:10:33.490111154 +0000 UTC m=+361.698433127" watchObservedRunningTime="2026-01-20 11:10:33.497610682 +0000 UTC m=+361.705932655" Jan 20 11:10:33 crc kubenswrapper[4725]: 
I0120 11:10:33.499431 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:33 crc kubenswrapper[4725]: I0120 11:10:33.940740 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" podStartSLOduration=3.940721367 podStartE2EDuration="3.940721367s" podCreationTimestamp="2026-01-20 11:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:10:33.841273357 +0000 UTC m=+362.049595340" watchObservedRunningTime="2026-01-20 11:10:33.940721367 +0000 UTC m=+362.149043340" Jan 20 11:10:34 crc kubenswrapper[4725]: I0120 11:10:34.209716 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:36 crc kubenswrapper[4725]: I0120 11:10:36.074896 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-867865d494-fqfz6"] Jan 20 11:10:36 crc kubenswrapper[4725]: I0120 11:10:36.108375 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g"] Jan 20 11:10:36 crc kubenswrapper[4725]: I0120 11:10:36.625639 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" podUID="95b59a23-ecd6-4f96-bf93-ffc1efdefc25" containerName="route-controller-manager" containerID="cri-o://0be220cc1c1cc9eae7269715a59b5bc3c0f3f02f64c531e8d87477041fa6ebd8" gracePeriod=30 Jan 20 11:10:36 crc kubenswrapper[4725]: I0120 11:10:36.625598 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" 
podUID="9639e6c8-b710-4924-83fd-88fddbc3685a" containerName="controller-manager" containerID="cri-o://94a9ba9d87f1d523742fa5672918e7ce993d1a54e9de69365d52a4c0c811d5f8" gracePeriod=30 Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.046694 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.051113 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.162852 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-proxy-ca-bundles\") pod \"9639e6c8-b710-4924-83fd-88fddbc3685a\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.162910 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-config\") pod \"9639e6c8-b710-4924-83fd-88fddbc3685a\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.162953 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvnbk\" (UniqueName: \"kubernetes.io/projected/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-kube-api-access-rvnbk\") pod \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\" (UID: \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\") " Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.162979 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-config\") pod \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\" (UID: 
\"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\") " Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.163002 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9639e6c8-b710-4924-83fd-88fddbc3685a-serving-cert\") pod \"9639e6c8-b710-4924-83fd-88fddbc3685a\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.163037 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpfzv\" (UniqueName: \"kubernetes.io/projected/9639e6c8-b710-4924-83fd-88fddbc3685a-kube-api-access-qpfzv\") pod \"9639e6c8-b710-4924-83fd-88fddbc3685a\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.163104 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-serving-cert\") pod \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\" (UID: \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\") " Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.163141 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-client-ca\") pod \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\" (UID: \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\") " Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.163158 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-client-ca\") pod \"9639e6c8-b710-4924-83fd-88fddbc3685a\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.164193 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-client-ca" (OuterVolumeSpecName: "client-ca") pod "95b59a23-ecd6-4f96-bf93-ffc1efdefc25" (UID: "95b59a23-ecd6-4f96-bf93-ffc1efdefc25"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.164583 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-client-ca" (OuterVolumeSpecName: "client-ca") pod "9639e6c8-b710-4924-83fd-88fddbc3685a" (UID: "9639e6c8-b710-4924-83fd-88fddbc3685a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.164765 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-config" (OuterVolumeSpecName: "config") pod "95b59a23-ecd6-4f96-bf93-ffc1efdefc25" (UID: "95b59a23-ecd6-4f96-bf93-ffc1efdefc25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.164858 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-config" (OuterVolumeSpecName: "config") pod "9639e6c8-b710-4924-83fd-88fddbc3685a" (UID: "9639e6c8-b710-4924-83fd-88fddbc3685a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.165223 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9639e6c8-b710-4924-83fd-88fddbc3685a" (UID: "9639e6c8-b710-4924-83fd-88fddbc3685a"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.168422 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9639e6c8-b710-4924-83fd-88fddbc3685a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9639e6c8-b710-4924-83fd-88fddbc3685a" (UID: "9639e6c8-b710-4924-83fd-88fddbc3685a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.169672 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "95b59a23-ecd6-4f96-bf93-ffc1efdefc25" (UID: "95b59a23-ecd6-4f96-bf93-ffc1efdefc25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.169721 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-kube-api-access-rvnbk" (OuterVolumeSpecName: "kube-api-access-rvnbk") pod "95b59a23-ecd6-4f96-bf93-ffc1efdefc25" (UID: "95b59a23-ecd6-4f96-bf93-ffc1efdefc25"). InnerVolumeSpecName "kube-api-access-rvnbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.176229 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9639e6c8-b710-4924-83fd-88fddbc3685a-kube-api-access-qpfzv" (OuterVolumeSpecName: "kube-api-access-qpfzv") pod "9639e6c8-b710-4924-83fd-88fddbc3685a" (UID: "9639e6c8-b710-4924-83fd-88fddbc3685a"). InnerVolumeSpecName "kube-api-access-qpfzv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.265518 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpfzv\" (UniqueName: \"kubernetes.io/projected/9639e6c8-b710-4924-83fd-88fddbc3685a-kube-api-access-qpfzv\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.265992 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.266037 4725 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.266061 4725 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.266101 4725 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.266127 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.266143 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvnbk\" (UniqueName: \"kubernetes.io/projected/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-kube-api-access-rvnbk\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.266161 4725 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.266174 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9639e6c8-b710-4924-83fd-88fddbc3685a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.308719 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7bdc655bf5-7czvl"] Jan 20 11:10:37 crc kubenswrapper[4725]: E0120 11:10:37.308980 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95b59a23-ecd6-4f96-bf93-ffc1efdefc25" containerName="route-controller-manager" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.308998 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="95b59a23-ecd6-4f96-bf93-ffc1efdefc25" containerName="route-controller-manager" Jan 20 11:10:37 crc kubenswrapper[4725]: E0120 11:10:37.309022 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9639e6c8-b710-4924-83fd-88fddbc3685a" containerName="controller-manager" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.309029 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9639e6c8-b710-4924-83fd-88fddbc3685a" containerName="controller-manager" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.309599 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="95b59a23-ecd6-4f96-bf93-ffc1efdefc25" containerName="route-controller-manager" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.309623 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9639e6c8-b710-4924-83fd-88fddbc3685a" containerName="controller-manager" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.309997 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.314328 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd"] Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.314981 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.320012 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bdc655bf5-7czvl"] Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.330740 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd"] Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.366780 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn9dm\" (UniqueName: \"kubernetes.io/projected/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-kube-api-access-cn9dm\") pod \"route-controller-manager-6b5549788c-zlfnd\" (UID: \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.366832 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-config\") pod \"controller-manager-7bdc655bf5-7czvl\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.366856 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-proxy-ca-bundles\") pod \"controller-manager-7bdc655bf5-7czvl\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.366895 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-client-ca\") pod \"controller-manager-7bdc655bf5-7czvl\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.366909 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-serving-cert\") pod \"route-controller-manager-6b5549788c-zlfnd\" (UID: \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.366927 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8409bf-69df-4201-9a4b-e2462760929d-serving-cert\") pod \"controller-manager-7bdc655bf5-7czvl\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.366950 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x957f\" (UniqueName: \"kubernetes.io/projected/8b8409bf-69df-4201-9a4b-e2462760929d-kube-api-access-x957f\") pod \"controller-manager-7bdc655bf5-7czvl\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " 
pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.366966 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-client-ca\") pod \"route-controller-manager-6b5549788c-zlfnd\" (UID: \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.366993 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-config\") pod \"route-controller-manager-6b5549788c-zlfnd\" (UID: \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.468224 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn9dm\" (UniqueName: \"kubernetes.io/projected/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-kube-api-access-cn9dm\") pod \"route-controller-manager-6b5549788c-zlfnd\" (UID: \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.468775 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-config\") pod \"controller-manager-7bdc655bf5-7czvl\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.468991 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-proxy-ca-bundles\") pod \"controller-manager-7bdc655bf5-7czvl\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.469366 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-client-ca\") pod \"controller-manager-7bdc655bf5-7czvl\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.469624 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-serving-cert\") pod \"route-controller-manager-6b5549788c-zlfnd\" (UID: \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.469912 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8409bf-69df-4201-9a4b-e2462760929d-serving-cert\") pod \"controller-manager-7bdc655bf5-7czvl\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.470207 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x957f\" (UniqueName: \"kubernetes.io/projected/8b8409bf-69df-4201-9a4b-e2462760929d-kube-api-access-x957f\") pod \"controller-manager-7bdc655bf5-7czvl\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: 
I0120 11:10:37.470445 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-client-ca\") pod \"route-controller-manager-6b5549788c-zlfnd\" (UID: \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.470697 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-config\") pod \"route-controller-manager-6b5549788c-zlfnd\" (UID: \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.471893 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-config\") pod \"controller-manager-7bdc655bf5-7czvl\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.470460 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-client-ca\") pod \"controller-manager-7bdc655bf5-7czvl\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.472774 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-proxy-ca-bundles\") pod \"controller-manager-7bdc655bf5-7czvl\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " 
pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.473497 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-client-ca\") pod \"route-controller-manager-6b5549788c-zlfnd\" (UID: \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.474649 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-config\") pod \"route-controller-manager-6b5549788c-zlfnd\" (UID: \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.475682 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8409bf-69df-4201-9a4b-e2462760929d-serving-cert\") pod \"controller-manager-7bdc655bf5-7czvl\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.478060 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-serving-cert\") pod \"route-controller-manager-6b5549788c-zlfnd\" (UID: \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.490831 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn9dm\" (UniqueName: 
\"kubernetes.io/projected/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-kube-api-access-cn9dm\") pod \"route-controller-manager-6b5549788c-zlfnd\" (UID: \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.493874 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x957f\" (UniqueName: \"kubernetes.io/projected/8b8409bf-69df-4201-9a4b-e2462760929d-kube-api-access-x957f\") pod \"controller-manager-7bdc655bf5-7czvl\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.623692 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.632014 4725 generic.go:334] "Generic (PLEG): container finished" podID="95b59a23-ecd6-4f96-bf93-ffc1efdefc25" containerID="0be220cc1c1cc9eae7269715a59b5bc3c0f3f02f64c531e8d87477041fa6ebd8" exitCode=0 Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.632095 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.632109 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" event={"ID":"95b59a23-ecd6-4f96-bf93-ffc1efdefc25","Type":"ContainerDied","Data":"0be220cc1c1cc9eae7269715a59b5bc3c0f3f02f64c531e8d87477041fa6ebd8"} Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.632142 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" event={"ID":"95b59a23-ecd6-4f96-bf93-ffc1efdefc25","Type":"ContainerDied","Data":"d2641cc78f4a233f2cdc04c78b092c86c19dcb9f83e6fab7f6cf33b38f6cf72a"} Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.632162 4725 scope.go:117] "RemoveContainer" containerID="0be220cc1c1cc9eae7269715a59b5bc3c0f3f02f64c531e8d87477041fa6ebd8" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.632176 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.633959 4725 generic.go:334] "Generic (PLEG): container finished" podID="9639e6c8-b710-4924-83fd-88fddbc3685a" containerID="94a9ba9d87f1d523742fa5672918e7ce993d1a54e9de69365d52a4c0c811d5f8" exitCode=0 Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.633985 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" event={"ID":"9639e6c8-b710-4924-83fd-88fddbc3685a","Type":"ContainerDied","Data":"94a9ba9d87f1d523742fa5672918e7ce993d1a54e9de69365d52a4c0c811d5f8"} Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.634004 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" event={"ID":"9639e6c8-b710-4924-83fd-88fddbc3685a","Type":"ContainerDied","Data":"634795ef5b82ae3c5466e830f01957e946ae9498de88c251ca057a67e42b3ab2"} Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.634052 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.661966 4725 scope.go:117] "RemoveContainer" containerID="0be220cc1c1cc9eae7269715a59b5bc3c0f3f02f64c531e8d87477041fa6ebd8" Jan 20 11:10:37 crc kubenswrapper[4725]: E0120 11:10:37.664240 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0be220cc1c1cc9eae7269715a59b5bc3c0f3f02f64c531e8d87477041fa6ebd8\": container with ID starting with 0be220cc1c1cc9eae7269715a59b5bc3c0f3f02f64c531e8d87477041fa6ebd8 not found: ID does not exist" containerID="0be220cc1c1cc9eae7269715a59b5bc3c0f3f02f64c531e8d87477041fa6ebd8" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.664291 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0be220cc1c1cc9eae7269715a59b5bc3c0f3f02f64c531e8d87477041fa6ebd8"} err="failed to get container status \"0be220cc1c1cc9eae7269715a59b5bc3c0f3f02f64c531e8d87477041fa6ebd8\": rpc error: code = NotFound desc = could not find container \"0be220cc1c1cc9eae7269715a59b5bc3c0f3f02f64c531e8d87477041fa6ebd8\": container with ID starting with 0be220cc1c1cc9eae7269715a59b5bc3c0f3f02f64c531e8d87477041fa6ebd8 not found: ID does not exist" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.664331 4725 scope.go:117] "RemoveContainer" containerID="94a9ba9d87f1d523742fa5672918e7ce993d1a54e9de69365d52a4c0c811d5f8" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.664940 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g"] Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.684921 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g"] Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.691178 4725 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-867865d494-fqfz6"] Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.691288 4725 scope.go:117] "RemoveContainer" containerID="94a9ba9d87f1d523742fa5672918e7ce993d1a54e9de69365d52a4c0c811d5f8" Jan 20 11:10:37 crc kubenswrapper[4725]: E0120 11:10:37.691767 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94a9ba9d87f1d523742fa5672918e7ce993d1a54e9de69365d52a4c0c811d5f8\": container with ID starting with 94a9ba9d87f1d523742fa5672918e7ce993d1a54e9de69365d52a4c0c811d5f8 not found: ID does not exist" containerID="94a9ba9d87f1d523742fa5672918e7ce993d1a54e9de69365d52a4c0c811d5f8" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.691810 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94a9ba9d87f1d523742fa5672918e7ce993d1a54e9de69365d52a4c0c811d5f8"} err="failed to get container status \"94a9ba9d87f1d523742fa5672918e7ce993d1a54e9de69365d52a4c0c811d5f8\": rpc error: code = NotFound desc = could not find container \"94a9ba9d87f1d523742fa5672918e7ce993d1a54e9de69365d52a4c0c811d5f8\": container with ID starting with 94a9ba9d87f1d523742fa5672918e7ce993d1a54e9de69365d52a4c0c811d5f8 not found: ID does not exist" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.696168 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-867865d494-fqfz6"] Jan 20 11:10:38 crc kubenswrapper[4725]: I0120 11:10:38.031659 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd"] Jan 20 11:10:38 crc kubenswrapper[4725]: I0120 11:10:38.079722 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bdc655bf5-7czvl"] Jan 20 11:10:38 crc kubenswrapper[4725]: W0120 
11:10:38.089421 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b8409bf_69df_4201_9a4b_e2462760929d.slice/crio-0c42ae7d99c9a5ac82f230b47c576c91ca86287412b5f4243c558a237edf5283 WatchSource:0}: Error finding container 0c42ae7d99c9a5ac82f230b47c576c91ca86287412b5f4243c558a237edf5283: Status 404 returned error can't find the container with id 0c42ae7d99c9a5ac82f230b47c576c91ca86287412b5f4243c558a237edf5283 Jan 20 11:10:38 crc kubenswrapper[4725]: I0120 11:10:38.640960 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" event={"ID":"aef2156e-ea5d-4a60-83f6-8b7e79400a0f","Type":"ContainerStarted","Data":"096514e4cfb7d9bc2b8b1c83641b6e7525f7d9627f37004f7913d4331939aded"} Jan 20 11:10:38 crc kubenswrapper[4725]: I0120 11:10:38.641005 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" event={"ID":"aef2156e-ea5d-4a60-83f6-8b7e79400a0f","Type":"ContainerStarted","Data":"856c5e58827f572d795e8b9e0bf4456fad7a0ebce5396897689f34b89161e927"} Jan 20 11:10:38 crc kubenswrapper[4725]: I0120 11:10:38.642140 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:38 crc kubenswrapper[4725]: I0120 11:10:38.644249 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" event={"ID":"8b8409bf-69df-4201-9a4b-e2462760929d","Type":"ContainerStarted","Data":"1156333ce1cc5bea4181a414939a827a3a37c1560a43f861a6df6fd919476c94"} Jan 20 11:10:38 crc kubenswrapper[4725]: I0120 11:10:38.644273 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" 
event={"ID":"8b8409bf-69df-4201-9a4b-e2462760929d","Type":"ContainerStarted","Data":"0c42ae7d99c9a5ac82f230b47c576c91ca86287412b5f4243c558a237edf5283"} Jan 20 11:10:38 crc kubenswrapper[4725]: I0120 11:10:38.645097 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:38 crc kubenswrapper[4725]: I0120 11:10:38.648683 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:38 crc kubenswrapper[4725]: I0120 11:10:38.663835 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" podStartSLOduration=1.663820013 podStartE2EDuration="1.663820013s" podCreationTimestamp="2026-01-20 11:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:10:38.660295431 +0000 UTC m=+366.868617404" watchObservedRunningTime="2026-01-20 11:10:38.663820013 +0000 UTC m=+366.872141986" Jan 20 11:10:38 crc kubenswrapper[4725]: I0120 11:10:38.679641 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" podStartSLOduration=1.679625473 podStartE2EDuration="1.679625473s" podCreationTimestamp="2026-01-20 11:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:10:38.67794622 +0000 UTC m=+366.886268213" watchObservedRunningTime="2026-01-20 11:10:38.679625473 +0000 UTC m=+366.887947446" Jan 20 11:10:38 crc kubenswrapper[4725]: I0120 11:10:38.743198 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:38 
crc kubenswrapper[4725]: I0120 11:10:38.938550 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95b59a23-ecd6-4f96-bf93-ffc1efdefc25" path="/var/lib/kubelet/pods/95b59a23-ecd6-4f96-bf93-ffc1efdefc25/volumes" Jan 20 11:10:38 crc kubenswrapper[4725]: I0120 11:10:38.939577 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9639e6c8-b710-4924-83fd-88fddbc3685a" path="/var/lib/kubelet/pods/9639e6c8-b710-4924-83fd-88fddbc3685a/volumes" Jan 20 11:10:47 crc kubenswrapper[4725]: I0120 11:10:47.912283 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vbr29"] Jan 20 11:10:47 crc kubenswrapper[4725]: I0120 11:10:47.912961 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vbr29" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" containerName="registry-server" containerID="cri-o://3fbdaac7a794f19e8c69f5422e6fb87dc9cdc634a039e858451ece6c43810bf3" gracePeriod=2 Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.322587 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vbr29" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.402058 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8wq8\" (UniqueName: \"kubernetes.io/projected/247dcae1-930b-476d-abbe-f33c3da0730b-kube-api-access-z8wq8\") pod \"247dcae1-930b-476d-abbe-f33c3da0730b\" (UID: \"247dcae1-930b-476d-abbe-f33c3da0730b\") " Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.402205 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/247dcae1-930b-476d-abbe-f33c3da0730b-utilities\") pod \"247dcae1-930b-476d-abbe-f33c3da0730b\" (UID: \"247dcae1-930b-476d-abbe-f33c3da0730b\") " Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.402339 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/247dcae1-930b-476d-abbe-f33c3da0730b-catalog-content\") pod \"247dcae1-930b-476d-abbe-f33c3da0730b\" (UID: \"247dcae1-930b-476d-abbe-f33c3da0730b\") " Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.403140 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/247dcae1-930b-476d-abbe-f33c3da0730b-utilities" (OuterVolumeSpecName: "utilities") pod "247dcae1-930b-476d-abbe-f33c3da0730b" (UID: "247dcae1-930b-476d-abbe-f33c3da0730b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.415574 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/247dcae1-930b-476d-abbe-f33c3da0730b-kube-api-access-z8wq8" (OuterVolumeSpecName: "kube-api-access-z8wq8") pod "247dcae1-930b-476d-abbe-f33c3da0730b" (UID: "247dcae1-930b-476d-abbe-f33c3da0730b"). InnerVolumeSpecName "kube-api-access-z8wq8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.487224 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/247dcae1-930b-476d-abbe-f33c3da0730b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "247dcae1-930b-476d-abbe-f33c3da0730b" (UID: "247dcae1-930b-476d-abbe-f33c3da0730b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.503916 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8wq8\" (UniqueName: \"kubernetes.io/projected/247dcae1-930b-476d-abbe-f33c3da0730b-kube-api-access-z8wq8\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.503985 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/247dcae1-930b-476d-abbe-f33c3da0730b-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.504009 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/247dcae1-930b-476d-abbe-f33c3da0730b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.701362 4725 generic.go:334] "Generic (PLEG): container finished" podID="247dcae1-930b-476d-abbe-f33c3da0730b" containerID="3fbdaac7a794f19e8c69f5422e6fb87dc9cdc634a039e858451ece6c43810bf3" exitCode=0 Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.701412 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbr29" event={"ID":"247dcae1-930b-476d-abbe-f33c3da0730b","Type":"ContainerDied","Data":"3fbdaac7a794f19e8c69f5422e6fb87dc9cdc634a039e858451ece6c43810bf3"} Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.701448 4725 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-vbr29" event={"ID":"247dcae1-930b-476d-abbe-f33c3da0730b","Type":"ContainerDied","Data":"01a79750127c09ea5c6dc20b661d6675fdb1d12c0c260ea3667e9b8f6125164f"} Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.701470 4725 scope.go:117] "RemoveContainer" containerID="3fbdaac7a794f19e8c69f5422e6fb87dc9cdc634a039e858451ece6c43810bf3" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.701604 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vbr29" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.737322 4725 scope.go:117] "RemoveContainer" containerID="6af623c61d80b856a4684e07d31119222fc79288bbdf0ea9ec5edf1fb293fc6b" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.738217 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vbr29"] Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.747682 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vbr29"] Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.766174 4725 scope.go:117] "RemoveContainer" containerID="319b868656c7fff6314303e2870aac558be49b56b40a78923c6a22ddb62ffc72" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.786679 4725 scope.go:117] "RemoveContainer" containerID="3fbdaac7a794f19e8c69f5422e6fb87dc9cdc634a039e858451ece6c43810bf3" Jan 20 11:10:48 crc kubenswrapper[4725]: E0120 11:10:48.787206 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fbdaac7a794f19e8c69f5422e6fb87dc9cdc634a039e858451ece6c43810bf3\": container with ID starting with 3fbdaac7a794f19e8c69f5422e6fb87dc9cdc634a039e858451ece6c43810bf3 not found: ID does not exist" containerID="3fbdaac7a794f19e8c69f5422e6fb87dc9cdc634a039e858451ece6c43810bf3" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 
11:10:48.787249 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fbdaac7a794f19e8c69f5422e6fb87dc9cdc634a039e858451ece6c43810bf3"} err="failed to get container status \"3fbdaac7a794f19e8c69f5422e6fb87dc9cdc634a039e858451ece6c43810bf3\": rpc error: code = NotFound desc = could not find container \"3fbdaac7a794f19e8c69f5422e6fb87dc9cdc634a039e858451ece6c43810bf3\": container with ID starting with 3fbdaac7a794f19e8c69f5422e6fb87dc9cdc634a039e858451ece6c43810bf3 not found: ID does not exist" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.787286 4725 scope.go:117] "RemoveContainer" containerID="6af623c61d80b856a4684e07d31119222fc79288bbdf0ea9ec5edf1fb293fc6b" Jan 20 11:10:48 crc kubenswrapper[4725]: E0120 11:10:48.787547 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6af623c61d80b856a4684e07d31119222fc79288bbdf0ea9ec5edf1fb293fc6b\": container with ID starting with 6af623c61d80b856a4684e07d31119222fc79288bbdf0ea9ec5edf1fb293fc6b not found: ID does not exist" containerID="6af623c61d80b856a4684e07d31119222fc79288bbdf0ea9ec5edf1fb293fc6b" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.787575 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6af623c61d80b856a4684e07d31119222fc79288bbdf0ea9ec5edf1fb293fc6b"} err="failed to get container status \"6af623c61d80b856a4684e07d31119222fc79288bbdf0ea9ec5edf1fb293fc6b\": rpc error: code = NotFound desc = could not find container \"6af623c61d80b856a4684e07d31119222fc79288bbdf0ea9ec5edf1fb293fc6b\": container with ID starting with 6af623c61d80b856a4684e07d31119222fc79288bbdf0ea9ec5edf1fb293fc6b not found: ID does not exist" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.787594 4725 scope.go:117] "RemoveContainer" containerID="319b868656c7fff6314303e2870aac558be49b56b40a78923c6a22ddb62ffc72" Jan 20 11:10:48 crc 
kubenswrapper[4725]: E0120 11:10:48.788007 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"319b868656c7fff6314303e2870aac558be49b56b40a78923c6a22ddb62ffc72\": container with ID starting with 319b868656c7fff6314303e2870aac558be49b56b40a78923c6a22ddb62ffc72 not found: ID does not exist" containerID="319b868656c7fff6314303e2870aac558be49b56b40a78923c6a22ddb62ffc72" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.788129 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"319b868656c7fff6314303e2870aac558be49b56b40a78923c6a22ddb62ffc72"} err="failed to get container status \"319b868656c7fff6314303e2870aac558be49b56b40a78923c6a22ddb62ffc72\": rpc error: code = NotFound desc = could not find container \"319b868656c7fff6314303e2870aac558be49b56b40a78923c6a22ddb62ffc72\": container with ID starting with 319b868656c7fff6314303e2870aac558be49b56b40a78923c6a22ddb62ffc72 not found: ID does not exist" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.796022 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd"] Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.796396 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" podUID="aef2156e-ea5d-4a60-83f6-8b7e79400a0f" containerName="route-controller-manager" containerID="cri-o://096514e4cfb7d9bc2b8b1c83641b6e7525f7d9627f37004f7913d4331939aded" gracePeriod=30 Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.939948 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" path="/var/lib/kubelet/pods/247dcae1-930b-476d-abbe-f33c3da0730b/volumes" Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.394105 4725 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.503720 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn9dm\" (UniqueName: \"kubernetes.io/projected/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-kube-api-access-cn9dm\") pod \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\" (UID: \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\") " Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.503793 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-serving-cert\") pod \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\" (UID: \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\") " Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.503923 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-client-ca\") pod \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\" (UID: \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\") " Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.503992 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-config\") pod \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\" (UID: \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\") " Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.504958 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-config" (OuterVolumeSpecName: "config") pod "aef2156e-ea5d-4a60-83f6-8b7e79400a0f" (UID: "aef2156e-ea5d-4a60-83f6-8b7e79400a0f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.504943 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-client-ca" (OuterVolumeSpecName: "client-ca") pod "aef2156e-ea5d-4a60-83f6-8b7e79400a0f" (UID: "aef2156e-ea5d-4a60-83f6-8b7e79400a0f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.516272 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "aef2156e-ea5d-4a60-83f6-8b7e79400a0f" (UID: "aef2156e-ea5d-4a60-83f6-8b7e79400a0f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.516345 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-kube-api-access-cn9dm" (OuterVolumeSpecName: "kube-api-access-cn9dm") pod "aef2156e-ea5d-4a60-83f6-8b7e79400a0f" (UID: "aef2156e-ea5d-4a60-83f6-8b7e79400a0f"). InnerVolumeSpecName "kube-api-access-cn9dm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.605600 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.605648 4725 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.605702 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.605715 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cn9dm\" (UniqueName: \"kubernetes.io/projected/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-kube-api-access-cn9dm\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.712141 4725 generic.go:334] "Generic (PLEG): container finished" podID="aef2156e-ea5d-4a60-83f6-8b7e79400a0f" containerID="096514e4cfb7d9bc2b8b1c83641b6e7525f7d9627f37004f7913d4331939aded" exitCode=0 Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.712207 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" event={"ID":"aef2156e-ea5d-4a60-83f6-8b7e79400a0f","Type":"ContainerDied","Data":"096514e4cfb7d9bc2b8b1c83641b6e7525f7d9627f37004f7913d4331939aded"} Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.712238 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" 
event={"ID":"aef2156e-ea5d-4a60-83f6-8b7e79400a0f","Type":"ContainerDied","Data":"856c5e58827f572d795e8b9e0bf4456fad7a0ebce5396897689f34b89161e927"} Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.712257 4725 scope.go:117] "RemoveContainer" containerID="096514e4cfb7d9bc2b8b1c83641b6e7525f7d9627f37004f7913d4331939aded" Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.712261 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.729946 4725 scope.go:117] "RemoveContainer" containerID="096514e4cfb7d9bc2b8b1c83641b6e7525f7d9627f37004f7913d4331939aded" Jan 20 11:10:49 crc kubenswrapper[4725]: E0120 11:10:49.730752 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"096514e4cfb7d9bc2b8b1c83641b6e7525f7d9627f37004f7913d4331939aded\": container with ID starting with 096514e4cfb7d9bc2b8b1c83641b6e7525f7d9627f37004f7913d4331939aded not found: ID does not exist" containerID="096514e4cfb7d9bc2b8b1c83641b6e7525f7d9627f37004f7913d4331939aded" Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.730842 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"096514e4cfb7d9bc2b8b1c83641b6e7525f7d9627f37004f7913d4331939aded"} err="failed to get container status \"096514e4cfb7d9bc2b8b1c83641b6e7525f7d9627f37004f7913d4331939aded\": rpc error: code = NotFound desc = could not find container \"096514e4cfb7d9bc2b8b1c83641b6e7525f7d9627f37004f7913d4331939aded\": container with ID starting with 096514e4cfb7d9bc2b8b1c83641b6e7525f7d9627f37004f7913d4331939aded not found: ID does not exist" Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.753160 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd"] Jan 
20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.753475 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd"] Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.309118 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-78bg4"] Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.309730 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-78bg4" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" containerName="registry-server" containerID="cri-o://4ea88973e6bc5bfee3d9e0f84e8590574c6a997771e7bc383f740e105c2a7784" gracePeriod=2 Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.512887 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxmdj"] Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.513223 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lxmdj" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" containerName="registry-server" containerID="cri-o://d2ab977fda92518444c1bc65215c64d5d47407bae017aeec5068d401607e4b48" gracePeriod=2 Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.633802 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz"] Jan 20 11:10:50 crc kubenswrapper[4725]: E0120 11:10:50.634071 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" containerName="extract-content" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.634102 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" containerName="extract-content" Jan 20 11:10:50 crc kubenswrapper[4725]: E0120 11:10:50.634124 4725 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="aef2156e-ea5d-4a60-83f6-8b7e79400a0f" containerName="route-controller-manager" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.634132 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="aef2156e-ea5d-4a60-83f6-8b7e79400a0f" containerName="route-controller-manager" Jan 20 11:10:50 crc kubenswrapper[4725]: E0120 11:10:50.634143 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" containerName="registry-server" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.634149 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" containerName="registry-server" Jan 20 11:10:50 crc kubenswrapper[4725]: E0120 11:10:50.634164 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" containerName="extract-utilities" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.634172 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" containerName="extract-utilities" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.634281 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" containerName="registry-server" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.634293 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="aef2156e-ea5d-4a60-83f6-8b7e79400a0f" containerName="route-controller-manager" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.634805 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.637623 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.637839 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.638685 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.639069 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.639343 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.640506 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.647030 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz"] Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.725340 4725 generic.go:334] "Generic (PLEG): container finished" podID="4f648359-ab53-49a7-8f1a-77281c2bd53c" containerID="4ea88973e6bc5bfee3d9e0f84e8590574c6a997771e7bc383f740e105c2a7784" exitCode=0 Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.725426 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78bg4" 
event={"ID":"4f648359-ab53-49a7-8f1a-77281c2bd53c","Type":"ContainerDied","Data":"4ea88973e6bc5bfee3d9e0f84e8590574c6a997771e7bc383f740e105c2a7784"} Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.727903 4725 generic.go:334] "Generic (PLEG): container finished" podID="39d02691-2128-45e8-841b-5bbf79e0a116" containerID="d2ab977fda92518444c1bc65215c64d5d47407bae017aeec5068d401607e4b48" exitCode=0 Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.727937 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxmdj" event={"ID":"39d02691-2128-45e8-841b-5bbf79e0a116","Type":"ContainerDied","Data":"d2ab977fda92518444c1bc65215c64d5d47407bae017aeec5068d401607e4b48"} Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.889046 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0b5252f-d628-4835-bf0d-a2ff0ccb14c4-config\") pod \"route-controller-manager-7bcb5959-268hz\" (UID: \"d0b5252f-d628-4835-bf0d-a2ff0ccb14c4\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.889143 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jhfv\" (UniqueName: \"kubernetes.io/projected/d0b5252f-d628-4835-bf0d-a2ff0ccb14c4-kube-api-access-5jhfv\") pod \"route-controller-manager-7bcb5959-268hz\" (UID: \"d0b5252f-d628-4835-bf0d-a2ff0ccb14c4\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.889196 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d0b5252f-d628-4835-bf0d-a2ff0ccb14c4-client-ca\") pod \"route-controller-manager-7bcb5959-268hz\" (UID: \"d0b5252f-d628-4835-bf0d-a2ff0ccb14c4\") " 
pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.889216 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0b5252f-d628-4835-bf0d-a2ff0ccb14c4-serving-cert\") pod \"route-controller-manager-7bcb5959-268hz\" (UID: \"d0b5252f-d628-4835-bf0d-a2ff0ccb14c4\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.937674 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aef2156e-ea5d-4a60-83f6-8b7e79400a0f" path="/var/lib/kubelet/pods/aef2156e-ea5d-4a60-83f6-8b7e79400a0f/volumes" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.990331 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0b5252f-d628-4835-bf0d-a2ff0ccb14c4-config\") pod \"route-controller-manager-7bcb5959-268hz\" (UID: \"d0b5252f-d628-4835-bf0d-a2ff0ccb14c4\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.990518 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jhfv\" (UniqueName: \"kubernetes.io/projected/d0b5252f-d628-4835-bf0d-a2ff0ccb14c4-kube-api-access-5jhfv\") pod \"route-controller-manager-7bcb5959-268hz\" (UID: \"d0b5252f-d628-4835-bf0d-a2ff0ccb14c4\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.990637 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d0b5252f-d628-4835-bf0d-a2ff0ccb14c4-client-ca\") pod \"route-controller-manager-7bcb5959-268hz\" (UID: \"d0b5252f-d628-4835-bf0d-a2ff0ccb14c4\") " 
pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.991582 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0b5252f-d628-4835-bf0d-a2ff0ccb14c4-serving-cert\") pod \"route-controller-manager-7bcb5959-268hz\" (UID: \"d0b5252f-d628-4835-bf0d-a2ff0ccb14c4\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.992721 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0b5252f-d628-4835-bf0d-a2ff0ccb14c4-config\") pod \"route-controller-manager-7bcb5959-268hz\" (UID: \"d0b5252f-d628-4835-bf0d-a2ff0ccb14c4\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.992229 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d0b5252f-d628-4835-bf0d-a2ff0ccb14c4-client-ca\") pod \"route-controller-manager-7bcb5959-268hz\" (UID: \"d0b5252f-d628-4835-bf0d-a2ff0ccb14c4\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.997525 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0b5252f-d628-4835-bf0d-a2ff0ccb14c4-serving-cert\") pod \"route-controller-manager-7bcb5959-268hz\" (UID: \"d0b5252f-d628-4835-bf0d-a2ff0ccb14c4\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.017809 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jhfv\" (UniqueName: 
\"kubernetes.io/projected/d0b5252f-d628-4835-bf0d-a2ff0ccb14c4-kube-api-access-5jhfv\") pod \"route-controller-manager-7bcb5959-268hz\" (UID: \"d0b5252f-d628-4835-bf0d-a2ff0ccb14c4\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.072560 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-78bg4" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.093890 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f648359-ab53-49a7-8f1a-77281c2bd53c-catalog-content\") pod \"4f648359-ab53-49a7-8f1a-77281c2bd53c\" (UID: \"4f648359-ab53-49a7-8f1a-77281c2bd53c\") " Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.094140 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66ggl\" (UniqueName: \"kubernetes.io/projected/4f648359-ab53-49a7-8f1a-77281c2bd53c-kube-api-access-66ggl\") pod \"4f648359-ab53-49a7-8f1a-77281c2bd53c\" (UID: \"4f648359-ab53-49a7-8f1a-77281c2bd53c\") " Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.094207 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f648359-ab53-49a7-8f1a-77281c2bd53c-utilities\") pod \"4f648359-ab53-49a7-8f1a-77281c2bd53c\" (UID: \"4f648359-ab53-49a7-8f1a-77281c2bd53c\") " Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.095429 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f648359-ab53-49a7-8f1a-77281c2bd53c-utilities" (OuterVolumeSpecName: "utilities") pod "4f648359-ab53-49a7-8f1a-77281c2bd53c" (UID: "4f648359-ab53-49a7-8f1a-77281c2bd53c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.099330 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f648359-ab53-49a7-8f1a-77281c2bd53c-kube-api-access-66ggl" (OuterVolumeSpecName: "kube-api-access-66ggl") pod "4f648359-ab53-49a7-8f1a-77281c2bd53c" (UID: "4f648359-ab53-49a7-8f1a-77281c2bd53c"). InnerVolumeSpecName "kube-api-access-66ggl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.178662 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxmdj" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.195233 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39d02691-2128-45e8-841b-5bbf79e0a116-catalog-content\") pod \"39d02691-2128-45e8-841b-5bbf79e0a116\" (UID: \"39d02691-2128-45e8-841b-5bbf79e0a116\") " Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.195333 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2n6d\" (UniqueName: \"kubernetes.io/projected/39d02691-2128-45e8-841b-5bbf79e0a116-kube-api-access-d2n6d\") pod \"39d02691-2128-45e8-841b-5bbf79e0a116\" (UID: \"39d02691-2128-45e8-841b-5bbf79e0a116\") " Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.195473 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39d02691-2128-45e8-841b-5bbf79e0a116-utilities\") pod \"39d02691-2128-45e8-841b-5bbf79e0a116\" (UID: \"39d02691-2128-45e8-841b-5bbf79e0a116\") " Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.195749 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66ggl\" (UniqueName: 
\"kubernetes.io/projected/4f648359-ab53-49a7-8f1a-77281c2bd53c-kube-api-access-66ggl\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.195763 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f648359-ab53-49a7-8f1a-77281c2bd53c-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.197597 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39d02691-2128-45e8-841b-5bbf79e0a116-utilities" (OuterVolumeSpecName: "utilities") pod "39d02691-2128-45e8-841b-5bbf79e0a116" (UID: "39d02691-2128-45e8-841b-5bbf79e0a116"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.200887 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39d02691-2128-45e8-841b-5bbf79e0a116-kube-api-access-d2n6d" (OuterVolumeSpecName: "kube-api-access-d2n6d") pod "39d02691-2128-45e8-841b-5bbf79e0a116" (UID: "39d02691-2128-45e8-841b-5bbf79e0a116"). InnerVolumeSpecName "kube-api-access-d2n6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.230459 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39d02691-2128-45e8-841b-5bbf79e0a116-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39d02691-2128-45e8-841b-5bbf79e0a116" (UID: "39d02691-2128-45e8-841b-5bbf79e0a116"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.239824 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f648359-ab53-49a7-8f1a-77281c2bd53c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4f648359-ab53-49a7-8f1a-77281c2bd53c" (UID: "4f648359-ab53-49a7-8f1a-77281c2bd53c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.285482 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.296665 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2n6d\" (UniqueName: \"kubernetes.io/projected/39d02691-2128-45e8-841b-5bbf79e0a116-kube-api-access-d2n6d\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.296703 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f648359-ab53-49a7-8f1a-77281c2bd53c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.296714 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39d02691-2128-45e8-841b-5bbf79e0a116-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.296722 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39d02691-2128-45e8-841b-5bbf79e0a116-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.619654 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz"] Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.734271 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" event={"ID":"d0b5252f-d628-4835-bf0d-a2ff0ccb14c4","Type":"ContainerStarted","Data":"92b421569e49355818c265e4463cebaf6267eea5a055a89a0398f40dd35cafa0"} Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.736865 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxmdj" event={"ID":"39d02691-2128-45e8-841b-5bbf79e0a116","Type":"ContainerDied","Data":"947644fa4cdb3ece3385cefa57c8a4ab47c9b07453257db4d816fb94806bf10c"} Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.736887 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxmdj" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.736921 4725 scope.go:117] "RemoveContainer" containerID="d2ab977fda92518444c1bc65215c64d5d47407bae017aeec5068d401607e4b48" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.740713 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78bg4" event={"ID":"4f648359-ab53-49a7-8f1a-77281c2bd53c","Type":"ContainerDied","Data":"c8cf137c59938a71804fd93575de29dac65e3fbdae7d9616af8e1e0e425812c7"} Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.740855 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-78bg4" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.759626 4725 scope.go:117] "RemoveContainer" containerID="5d88e1156fdd2131fb13a542776647afc695e341abc2d0bb759d85d523d36656" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.779064 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxmdj"] Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.781929 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxmdj"] Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.786274 4725 scope.go:117] "RemoveContainer" containerID="bef010ae40f12ebf94868b1a7f63b8c8ce98852cd1c4ccb364c0b676606ca709" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.793320 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-78bg4"] Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.796434 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-78bg4"] Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.808379 4725 scope.go:117] "RemoveContainer" containerID="4ea88973e6bc5bfee3d9e0f84e8590574c6a997771e7bc383f740e105c2a7784" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.833186 4725 scope.go:117] "RemoveContainer" containerID="9ddffe79c18c43e0511499904feb4ce11970963d5b621ba51bd27f1e5c8b5059" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.850923 4725 scope.go:117] "RemoveContainer" containerID="06596abc1be5a61b774b86675bea7d758f393f271eafec99aee9e0618b84133b" Jan 20 11:10:52 crc kubenswrapper[4725]: I0120 11:10:52.748181 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" 
event={"ID":"d0b5252f-d628-4835-bf0d-a2ff0ccb14c4","Type":"ContainerStarted","Data":"ce9608d63883a216710a955470d15d5b6a6b43b3842886a25e3377acd9d6cd05"} Jan 20 11:10:52 crc kubenswrapper[4725]: I0120 11:10:52.748500 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" Jan 20 11:10:52 crc kubenswrapper[4725]: I0120 11:10:52.753374 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" Jan 20 11:10:52 crc kubenswrapper[4725]: I0120 11:10:52.794207 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" podStartSLOduration=4.794192841 podStartE2EDuration="4.794192841s" podCreationTimestamp="2026-01-20 11:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:10:52.768466915 +0000 UTC m=+380.976788888" watchObservedRunningTime="2026-01-20 11:10:52.794192841 +0000 UTC m=+381.002514814" Jan 20 11:10:52 crc kubenswrapper[4725]: I0120 11:10:52.940522 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" path="/var/lib/kubelet/pods/39d02691-2128-45e8-841b-5bbf79e0a116/volumes" Jan 20 11:10:52 crc kubenswrapper[4725]: I0120 11:10:52.941660 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" path="/var/lib/kubelet/pods/4f648359-ab53-49a7-8f1a-77281c2bd53c/volumes" Jan 20 11:10:56 crc kubenswrapper[4725]: I0120 11:10:56.727439 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Jan 20 11:10:56 crc kubenswrapper[4725]: I0120 11:10:56.728230 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.042470 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xs9z9"] Jan 20 11:10:57 crc kubenswrapper[4725]: E0120 11:10:57.042795 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" containerName="extract-content" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.042816 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" containerName="extract-content" Jan 20 11:10:57 crc kubenswrapper[4725]: E0120 11:10:57.042838 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" containerName="registry-server" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.042846 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" containerName="registry-server" Jan 20 11:10:57 crc kubenswrapper[4725]: E0120 11:10:57.042857 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" containerName="registry-server" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.042866 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" containerName="registry-server" Jan 20 11:10:57 crc kubenswrapper[4725]: E0120 11:10:57.042875 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" 
containerName="extract-utilities" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.042882 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" containerName="extract-utilities" Jan 20 11:10:57 crc kubenswrapper[4725]: E0120 11:10:57.042893 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" containerName="extract-content" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.042900 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" containerName="extract-content" Jan 20 11:10:57 crc kubenswrapper[4725]: E0120 11:10:57.042912 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" containerName="extract-utilities" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.042921 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" containerName="extract-utilities" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.043030 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" containerName="registry-server" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.043050 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" containerName="registry-server" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.043603 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.056764 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xs9z9"] Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.225542 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/877b47a7-ec29-4467-a0c7-a4561a12573b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.225640 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/877b47a7-ec29-4467-a0c7-a4561a12573b-registry-certificates\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.225662 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/877b47a7-ec29-4467-a0c7-a4561a12573b-trusted-ca\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.225695 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmjnf\" (UniqueName: \"kubernetes.io/projected/877b47a7-ec29-4467-a0c7-a4561a12573b-kube-api-access-fmjnf\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.225715 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/877b47a7-ec29-4467-a0c7-a4561a12573b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.225748 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.225766 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/877b47a7-ec29-4467-a0c7-a4561a12573b-registry-tls\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.225786 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/877b47a7-ec29-4467-a0c7-a4561a12573b-bound-sa-token\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.275503 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.327065 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/877b47a7-ec29-4467-a0c7-a4561a12573b-trusted-ca\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.327184 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmjnf\" (UniqueName: \"kubernetes.io/projected/877b47a7-ec29-4467-a0c7-a4561a12573b-kube-api-access-fmjnf\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.327208 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/877b47a7-ec29-4467-a0c7-a4561a12573b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.327233 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/877b47a7-ec29-4467-a0c7-a4561a12573b-registry-tls\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.327253 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/877b47a7-ec29-4467-a0c7-a4561a12573b-bound-sa-token\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.327284 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/877b47a7-ec29-4467-a0c7-a4561a12573b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.327323 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/877b47a7-ec29-4467-a0c7-a4561a12573b-registry-certificates\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.328821 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/877b47a7-ec29-4467-a0c7-a4561a12573b-registry-certificates\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.328849 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/877b47a7-ec29-4467-a0c7-a4561a12573b-trusted-ca\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 
20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.329562 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/877b47a7-ec29-4467-a0c7-a4561a12573b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9"
Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.334609 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/877b47a7-ec29-4467-a0c7-a4561a12573b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9"
Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.334736 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/877b47a7-ec29-4467-a0c7-a4561a12573b-registry-tls\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9"
Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.348610 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmjnf\" (UniqueName: \"kubernetes.io/projected/877b47a7-ec29-4467-a0c7-a4561a12573b-kube-api-access-fmjnf\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9"
Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.358979 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/877b47a7-ec29-4467-a0c7-a4561a12573b-bound-sa-token\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9"
Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.364224 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9"
Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.854475 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xs9z9"]
Jan 20 11:10:57 crc kubenswrapper[4725]: W0120 11:10:57.863474 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod877b47a7_ec29_4467_a0c7_a4561a12573b.slice/crio-4766a4f1711cfd43998495f4f6a691a3602d660878a5f48a89b9f538dd4750ed WatchSource:0}: Error finding container 4766a4f1711cfd43998495f4f6a691a3602d660878a5f48a89b9f538dd4750ed: Status 404 returned error can't find the container with id 4766a4f1711cfd43998495f4f6a691a3602d660878a5f48a89b9f538dd4750ed
Jan 20 11:10:58 crc kubenswrapper[4725]: I0120 11:10:58.790772 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" event={"ID":"877b47a7-ec29-4467-a0c7-a4561a12573b","Type":"ContainerStarted","Data":"42f5a1cb1e396971fd427e7f5b06701f7c76c63599e3407c4a255735d51ccbd3"}
Jan 20 11:10:58 crc kubenswrapper[4725]: I0120 11:10:58.791492 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" event={"ID":"877b47a7-ec29-4467-a0c7-a4561a12573b","Type":"ContainerStarted","Data":"4766a4f1711cfd43998495f4f6a691a3602d660878a5f48a89b9f538dd4750ed"}
Jan 20 11:10:58 crc kubenswrapper[4725]: I0120 11:10:58.791529 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9"
Jan 20 11:10:58 crc kubenswrapper[4725]: I0120 11:10:58.815873 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" podStartSLOduration=1.815856248 podStartE2EDuration="1.815856248s" podCreationTimestamp="2026-01-20 11:10:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:10:58.81214836 +0000 UTC m=+387.020470343" watchObservedRunningTime="2026-01-20 11:10:58.815856248 +0000 UTC m=+387.024178221"
Jan 20 11:11:17 crc kubenswrapper[4725]: I0120 11:11:17.369463 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9"
Jan 20 11:11:17 crc kubenswrapper[4725]: I0120 11:11:17.428717 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w5jhq"]
Jan 20 11:11:26 crc kubenswrapper[4725]: I0120 11:11:26.727872 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 11:11:26 crc kubenswrapper[4725]: I0120 11:11:26.729301 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 11:11:26 crc kubenswrapper[4725]: I0120 11:11:26.729414 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8"
Jan 20 11:11:26 crc kubenswrapper[4725]: I0120 11:11:26.730583 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c6e4775cbb437f357a123202b57e1186c3ce260099a44987956a10f515d8ed5f"} pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 20 11:11:26 crc kubenswrapper[4725]: I0120 11:11:26.730729 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" containerID="cri-o://c6e4775cbb437f357a123202b57e1186c3ce260099a44987956a10f515d8ed5f" gracePeriod=600
Jan 20 11:11:27 crc kubenswrapper[4725]: I0120 11:11:27.747668 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7bdc655bf5-7czvl"]
Jan 20 11:11:27 crc kubenswrapper[4725]: I0120 11:11:27.748212 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" podUID="8b8409bf-69df-4201-9a4b-e2462760929d" containerName="controller-manager" containerID="cri-o://1156333ce1cc5bea4181a414939a827a3a37c1560a43f861a6df6fd919476c94" gracePeriod=30
Jan 20 11:11:27 crc kubenswrapper[4725]: I0120 11:11:27.973726 4725 generic.go:334] "Generic (PLEG): container finished" podID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerID="c6e4775cbb437f357a123202b57e1186c3ce260099a44987956a10f515d8ed5f" exitCode=0
Jan 20 11:11:27 crc kubenswrapper[4725]: I0120 11:11:27.973800 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerDied","Data":"c6e4775cbb437f357a123202b57e1186c3ce260099a44987956a10f515d8ed5f"}
Jan 20 11:11:27 crc kubenswrapper[4725]: I0120 11:11:27.973852 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"76a81355d30bc66a5871f02822f0a9240f627473cc4697ea154fb735e225c69b"}
Jan 20 11:11:27 crc kubenswrapper[4725]: I0120 11:11:27.973871 4725 scope.go:117] "RemoveContainer" containerID="1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665"
Jan 20 11:11:27 crc kubenswrapper[4725]: I0120 11:11:27.975393 4725 generic.go:334] "Generic (PLEG): container finished" podID="8b8409bf-69df-4201-9a4b-e2462760929d" containerID="1156333ce1cc5bea4181a414939a827a3a37c1560a43f861a6df6fd919476c94" exitCode=0
Jan 20 11:11:27 crc kubenswrapper[4725]: I0120 11:11:27.975422 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" event={"ID":"8b8409bf-69df-4201-9a4b-e2462760929d","Type":"ContainerDied","Data":"1156333ce1cc5bea4181a414939a827a3a37c1560a43f861a6df6fd919476c94"}
Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.190726 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl"
Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.259990 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x957f\" (UniqueName: \"kubernetes.io/projected/8b8409bf-69df-4201-9a4b-e2462760929d-kube-api-access-x957f\") pod \"8b8409bf-69df-4201-9a4b-e2462760929d\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") "
Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.260044 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-proxy-ca-bundles\") pod \"8b8409bf-69df-4201-9a4b-e2462760929d\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") "
Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.260062 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8409bf-69df-4201-9a4b-e2462760929d-serving-cert\") pod \"8b8409bf-69df-4201-9a4b-e2462760929d\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") "
Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.260098 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-client-ca\") pod \"8b8409bf-69df-4201-9a4b-e2462760929d\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") "
Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.260127 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-config\") pod \"8b8409bf-69df-4201-9a4b-e2462760929d\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") "
Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.260954 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-client-ca" (OuterVolumeSpecName: "client-ca") pod "8b8409bf-69df-4201-9a4b-e2462760929d" (UID: "8b8409bf-69df-4201-9a4b-e2462760929d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.260988 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8b8409bf-69df-4201-9a4b-e2462760929d" (UID: "8b8409bf-69df-4201-9a4b-e2462760929d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.261026 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-config" (OuterVolumeSpecName: "config") pod "8b8409bf-69df-4201-9a4b-e2462760929d" (UID: "8b8409bf-69df-4201-9a4b-e2462760929d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.265185 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b8409bf-69df-4201-9a4b-e2462760929d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8b8409bf-69df-4201-9a4b-e2462760929d" (UID: "8b8409bf-69df-4201-9a4b-e2462760929d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.265640 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b8409bf-69df-4201-9a4b-e2462760929d-kube-api-access-x957f" (OuterVolumeSpecName: "kube-api-access-x957f") pod "8b8409bf-69df-4201-9a4b-e2462760929d" (UID: "8b8409bf-69df-4201-9a4b-e2462760929d"). InnerVolumeSpecName "kube-api-access-x957f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.361253 4725 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-client-ca\") on node \"crc\" DevicePath \"\""
Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.361301 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-config\") on node \"crc\" DevicePath \"\""
Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.361531 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x957f\" (UniqueName: \"kubernetes.io/projected/8b8409bf-69df-4201-9a4b-e2462760929d-kube-api-access-x957f\") on node \"crc\" DevicePath \"\""
Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.361548 4725 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.361561 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8409bf-69df-4201-9a4b-e2462760929d-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.984477 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" event={"ID":"8b8409bf-69df-4201-9a4b-e2462760929d","Type":"ContainerDied","Data":"0c42ae7d99c9a5ac82f230b47c576c91ca86287412b5f4243c558a237edf5283"}
Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.984512 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl"
Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.984533 4725 scope.go:117] "RemoveContainer" containerID="1156333ce1cc5bea4181a414939a827a3a37c1560a43f861a6df6fd919476c94"
Jan 20 11:11:29 crc kubenswrapper[4725]: I0120 11:11:29.009244 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7bdc655bf5-7czvl"]
Jan 20 11:11:29 crc kubenswrapper[4725]: I0120 11:11:29.012740 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7bdc655bf5-7czvl"]
Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.338965 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-556d6dff97-md6hn"]
Jan 20 11:11:30 crc kubenswrapper[4725]: E0120 11:11:30.340298 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b8409bf-69df-4201-9a4b-e2462760929d" containerName="controller-manager"
Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.340437 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b8409bf-69df-4201-9a4b-e2462760929d" containerName="controller-manager"
Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.340710 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b8409bf-69df-4201-9a4b-e2462760929d" containerName="controller-manager"
Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.341498 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn"
Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.347963 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.348708 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.349031 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.349350 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.349747 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.354209 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-556d6dff97-md6hn"]
Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.354742 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.356359 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.493436 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b60a1413-98c5-44fe-ada4-9df9946861cd-proxy-ca-bundles\") pod \"controller-manager-556d6dff97-md6hn\" (UID: \"b60a1413-98c5-44fe-ada4-9df9946861cd\") " pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn"
Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.493493 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b60a1413-98c5-44fe-ada4-9df9946861cd-config\") pod \"controller-manager-556d6dff97-md6hn\" (UID: \"b60a1413-98c5-44fe-ada4-9df9946861cd\") " pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn"
Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.493531 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm27l\" (UniqueName: \"kubernetes.io/projected/b60a1413-98c5-44fe-ada4-9df9946861cd-kube-api-access-sm27l\") pod \"controller-manager-556d6dff97-md6hn\" (UID: \"b60a1413-98c5-44fe-ada4-9df9946861cd\") " pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn"
Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.493570 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b60a1413-98c5-44fe-ada4-9df9946861cd-serving-cert\") pod \"controller-manager-556d6dff97-md6hn\" (UID: \"b60a1413-98c5-44fe-ada4-9df9946861cd\") " pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn"
Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.493624 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b60a1413-98c5-44fe-ada4-9df9946861cd-client-ca\") pod \"controller-manager-556d6dff97-md6hn\" (UID: \"b60a1413-98c5-44fe-ada4-9df9946861cd\") " pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn"
Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.594749 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b60a1413-98c5-44fe-ada4-9df9946861cd-proxy-ca-bundles\") pod \"controller-manager-556d6dff97-md6hn\" (UID: \"b60a1413-98c5-44fe-ada4-9df9946861cd\") " pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn"
Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.594809 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b60a1413-98c5-44fe-ada4-9df9946861cd-config\") pod \"controller-manager-556d6dff97-md6hn\" (UID: \"b60a1413-98c5-44fe-ada4-9df9946861cd\") " pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn"
Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.595510 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm27l\" (UniqueName: \"kubernetes.io/projected/b60a1413-98c5-44fe-ada4-9df9946861cd-kube-api-access-sm27l\") pod \"controller-manager-556d6dff97-md6hn\" (UID: \"b60a1413-98c5-44fe-ada4-9df9946861cd\") " pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn"
Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.595568 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b60a1413-98c5-44fe-ada4-9df9946861cd-serving-cert\") pod \"controller-manager-556d6dff97-md6hn\" (UID: \"b60a1413-98c5-44fe-ada4-9df9946861cd\") " pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn"
Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.596800 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b60a1413-98c5-44fe-ada4-9df9946861cd-client-ca\") pod \"controller-manager-556d6dff97-md6hn\" (UID: \"b60a1413-98c5-44fe-ada4-9df9946861cd\") " pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn"
Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.597909 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b60a1413-98c5-44fe-ada4-9df9946861cd-client-ca\") pod \"controller-manager-556d6dff97-md6hn\" (UID: \"b60a1413-98c5-44fe-ada4-9df9946861cd\") " pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn"
Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.600024 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b60a1413-98c5-44fe-ada4-9df9946861cd-config\") pod \"controller-manager-556d6dff97-md6hn\" (UID: \"b60a1413-98c5-44fe-ada4-9df9946861cd\") " pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn"
Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.607551 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b60a1413-98c5-44fe-ada4-9df9946861cd-proxy-ca-bundles\") pod \"controller-manager-556d6dff97-md6hn\" (UID: \"b60a1413-98c5-44fe-ada4-9df9946861cd\") " pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn"
Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.611458 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b60a1413-98c5-44fe-ada4-9df9946861cd-serving-cert\") pod \"controller-manager-556d6dff97-md6hn\" (UID: \"b60a1413-98c5-44fe-ada4-9df9946861cd\") " pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn"
Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.622761 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm27l\" (UniqueName: \"kubernetes.io/projected/b60a1413-98c5-44fe-ada4-9df9946861cd-kube-api-access-sm27l\") pod \"controller-manager-556d6dff97-md6hn\" (UID: \"b60a1413-98c5-44fe-ada4-9df9946861cd\") " pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn"
Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.669911 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn"
Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.878517 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-556d6dff97-md6hn"]
Jan 20 11:11:30 crc kubenswrapper[4725]: W0120 11:11:30.887559 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb60a1413_98c5_44fe_ada4_9df9946861cd.slice/crio-b3aca16cd6aacf9e368aa3f299a9f7fe88ab93bcf0f368e980daef87a021cee9 WatchSource:0}: Error finding container b3aca16cd6aacf9e368aa3f299a9f7fe88ab93bcf0f368e980daef87a021cee9: Status 404 returned error can't find the container with id b3aca16cd6aacf9e368aa3f299a9f7fe88ab93bcf0f368e980daef87a021cee9
Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.938008 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b8409bf-69df-4201-9a4b-e2462760929d" path="/var/lib/kubelet/pods/8b8409bf-69df-4201-9a4b-e2462760929d/volumes"
Jan 20 11:11:31 crc kubenswrapper[4725]: I0120 11:11:31.336772 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn" event={"ID":"b60a1413-98c5-44fe-ada4-9df9946861cd","Type":"ContainerStarted","Data":"61c3300e12afe758a90625c8afe2eabc06fca38bc245748d9b5561034d5d4340"}
Jan 20 11:11:31 crc kubenswrapper[4725]: I0120 11:11:31.336827 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn" event={"ID":"b60a1413-98c5-44fe-ada4-9df9946861cd","Type":"ContainerStarted","Data":"b3aca16cd6aacf9e368aa3f299a9f7fe88ab93bcf0f368e980daef87a021cee9"}
Jan 20 11:11:31 crc kubenswrapper[4725]: I0120 11:11:31.337606 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn"
Jan 20 11:11:31 crc kubenswrapper[4725]: I0120 11:11:31.341730 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn"
Jan 20 11:11:31 crc kubenswrapper[4725]: I0120 11:11:31.360617 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn" podStartSLOduration=4.360593026 podStartE2EDuration="4.360593026s" podCreationTimestamp="2026-01-20 11:11:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:11:31.359173711 +0000 UTC m=+419.567495694" watchObservedRunningTime="2026-01-20 11:11:31.360593026 +0000 UTC m=+419.568914999"
Jan 20 11:11:38 crc kubenswrapper[4725]: I0120 11:11:38.865511 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6n4zh"]
Jan 20 11:11:38 crc kubenswrapper[4725]: I0120 11:11:38.866472 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6n4zh" podUID="7ebdb343-11c1-4e64-9538-98ca4298b821" containerName="registry-server" containerID="cri-o://a353b91011ed8a0053f12f280e559ea334c135ab3db3548126610c4f6e3cdf19" gracePeriod=30
Jan 20 11:11:38 crc kubenswrapper[4725]: I0120 11:11:38.872397 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8pplm"]
Jan 20 11:11:38 crc kubenswrapper[4725]: I0120 11:11:38.872718 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8pplm" podUID="1ba77d4b-0178-4730-8869-389efdf58851" containerName="registry-server" containerID="cri-o://fe1472cf74a505268aea085b9463dd11873df45aba71fba5af135317ac4193c1" gracePeriod=30
Jan 20 11:11:38 crc kubenswrapper[4725]: I0120 11:11:38.887093 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tgvmj"]
Jan 20 11:11:38 crc kubenswrapper[4725]: I0120 11:11:38.888426 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" podUID="502a4051-5a60-4e90-a3f2-7dc035950a9b" containerName="marketplace-operator" containerID="cri-o://4e8f7c705143e7b6c5cb6feb639a537dc020d95d17ca8baee39e25fc4da83488" gracePeriod=30
Jan 20 11:11:38 crc kubenswrapper[4725]: I0120 11:11:38.889750 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2jtp"]
Jan 20 11:11:38 crc kubenswrapper[4725]: I0120 11:11:38.890147 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c2jtp" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" containerName="registry-server" containerID="cri-o://a8afec3445fd422a4b7356f38493af3a76b9139996d9b5d98e4135127e6f59ee" gracePeriod=30
Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.050743 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6nxjc"]
Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.051207 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6nxjc" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" containerName="registry-server" containerID="cri-o://7b79e336b8a9444cc9ab59a4ed8131e0eaf84cfc7a492f4b7e369343d81e1806" gracePeriod=30
Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.060972 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-htj9r"]
Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.061695 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-htj9r"
Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.075504 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-htj9r"]
Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.165421 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5666b0dd-5364-4bee-a091-26fa796770cf-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-htj9r\" (UID: \"5666b0dd-5364-4bee-a091-26fa796770cf\") " pod="openshift-marketplace/marketplace-operator-79b997595-htj9r"
Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.165482 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pjt4\" (UniqueName: \"kubernetes.io/projected/5666b0dd-5364-4bee-a091-26fa796770cf-kube-api-access-6pjt4\") pod \"marketplace-operator-79b997595-htj9r\" (UID: \"5666b0dd-5364-4bee-a091-26fa796770cf\") " pod="openshift-marketplace/marketplace-operator-79b997595-htj9r"
Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.165565 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5666b0dd-5364-4bee-a091-26fa796770cf-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-htj9r\" (UID: \"5666b0dd-5364-4bee-a091-26fa796770cf\") " pod="openshift-marketplace/marketplace-operator-79b997595-htj9r"
Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.266846 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5666b0dd-5364-4bee-a091-26fa796770cf-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-htj9r\" (UID: \"5666b0dd-5364-4bee-a091-26fa796770cf\") " pod="openshift-marketplace/marketplace-operator-79b997595-htj9r"
Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.266900 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pjt4\" (UniqueName: \"kubernetes.io/projected/5666b0dd-5364-4bee-a091-26fa796770cf-kube-api-access-6pjt4\") pod \"marketplace-operator-79b997595-htj9r\" (UID: \"5666b0dd-5364-4bee-a091-26fa796770cf\") " pod="openshift-marketplace/marketplace-operator-79b997595-htj9r"
Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.266926 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5666b0dd-5364-4bee-a091-26fa796770cf-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-htj9r\" (UID: \"5666b0dd-5364-4bee-a091-26fa796770cf\") " pod="openshift-marketplace/marketplace-operator-79b997595-htj9r"
Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.268053 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5666b0dd-5364-4bee-a091-26fa796770cf-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-htj9r\" (UID: \"5666b0dd-5364-4bee-a091-26fa796770cf\") " pod="openshift-marketplace/marketplace-operator-79b997595-htj9r"
Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.279158 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5666b0dd-5364-4bee-a091-26fa796770cf-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-htj9r\" (UID: \"5666b0dd-5364-4bee-a091-26fa796770cf\") " pod="openshift-marketplace/marketplace-operator-79b997595-htj9r"
Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.292320 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pjt4\" (UniqueName: \"kubernetes.io/projected/5666b0dd-5364-4bee-a091-26fa796770cf-kube-api-access-6pjt4\") pod \"marketplace-operator-79b997595-htj9r\" (UID: \"5666b0dd-5364-4bee-a091-26fa796770cf\") " pod="openshift-marketplace/marketplace-operator-79b997595-htj9r"
Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.381007 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-htj9r"
Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.393475 4725 generic.go:334] "Generic (PLEG): container finished" podID="502a4051-5a60-4e90-a3f2-7dc035950a9b" containerID="4e8f7c705143e7b6c5cb6feb639a537dc020d95d17ca8baee39e25fc4da83488" exitCode=0
Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.393550 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" event={"ID":"502a4051-5a60-4e90-a3f2-7dc035950a9b","Type":"ContainerDied","Data":"4e8f7c705143e7b6c5cb6feb639a537dc020d95d17ca8baee39e25fc4da83488"}
Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.395922 4725 generic.go:334] "Generic (PLEG): container finished" podID="7ebdb343-11c1-4e64-9538-98ca4298b821" containerID="a353b91011ed8a0053f12f280e559ea334c135ab3db3548126610c4f6e3cdf19" exitCode=0
Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.395978 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6n4zh" event={"ID":"7ebdb343-11c1-4e64-9538-98ca4298b821","Type":"ContainerDied","Data":"a353b91011ed8a0053f12f280e559ea334c135ab3db3548126610c4f6e3cdf19"}
Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.397691 4725 generic.go:334] "Generic (PLEG): container finished" podID="10de7f77-2b14-4c56-b4db-ebb93422b89c" containerID="a8afec3445fd422a4b7356f38493af3a76b9139996d9b5d98e4135127e6f59ee" exitCode=0
Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.397736 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2jtp" event={"ID":"10de7f77-2b14-4c56-b4db-ebb93422b89c","Type":"ContainerDied","Data":"a8afec3445fd422a4b7356f38493af3a76b9139996d9b5d98e4135127e6f59ee"}
Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.401084 4725 generic.go:334] "Generic (PLEG): container finished" podID="1ba77d4b-0178-4730-8869-389efdf58851" containerID="fe1472cf74a505268aea085b9463dd11873df45aba71fba5af135317ac4193c1" exitCode=0
Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.401171 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8pplm" event={"ID":"1ba77d4b-0178-4730-8869-389efdf58851","Type":"ContainerDied","Data":"fe1472cf74a505268aea085b9463dd11873df45aba71fba5af135317ac4193c1"}
Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.476895 4725 generic.go:334] "Generic (PLEG): container finished" podID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" containerID="7b79e336b8a9444cc9ab59a4ed8131e0eaf84cfc7a492f4b7e369343d81e1806" exitCode=0
Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.476966 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nxjc" event={"ID":"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6","Type":"ContainerDied","Data":"7b79e336b8a9444cc9ab59a4ed8131e0eaf84cfc7a492f4b7e369343d81e1806"}
Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.510287 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj"
Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.681527 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/502a4051-5a60-4e90-a3f2-7dc035950a9b-marketplace-trusted-ca\") pod \"502a4051-5a60-4e90-a3f2-7dc035950a9b\" (UID: \"502a4051-5a60-4e90-a3f2-7dc035950a9b\") "
Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.681640 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbmfd\" (UniqueName: \"kubernetes.io/projected/502a4051-5a60-4e90-a3f2-7dc035950a9b-kube-api-access-qbmfd\") pod \"502a4051-5a60-4e90-a3f2-7dc035950a9b\" (UID: \"502a4051-5a60-4e90-a3f2-7dc035950a9b\") "
Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.681696 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/502a4051-5a60-4e90-a3f2-7dc035950a9b-marketplace-operator-metrics\") pod \"502a4051-5a60-4e90-a3f2-7dc035950a9b\" (UID: \"502a4051-5a60-4e90-a3f2-7dc035950a9b\") "
Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.682976 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/502a4051-5a60-4e90-a3f2-7dc035950a9b-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "502a4051-5a60-4e90-a3f2-7dc035950a9b" (UID: "502a4051-5a60-4e90-a3f2-7dc035950a9b"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.688463 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/502a4051-5a60-4e90-a3f2-7dc035950a9b-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "502a4051-5a60-4e90-a3f2-7dc035950a9b" (UID: "502a4051-5a60-4e90-a3f2-7dc035950a9b"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.690649 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/502a4051-5a60-4e90-a3f2-7dc035950a9b-kube-api-access-qbmfd" (OuterVolumeSpecName: "kube-api-access-qbmfd") pod "502a4051-5a60-4e90-a3f2-7dc035950a9b" (UID: "502a4051-5a60-4e90-a3f2-7dc035950a9b"). InnerVolumeSpecName "kube-api-access-qbmfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.733537 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2jtp" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.762166 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6nxjc" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.785807 4725 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/502a4051-5a60-4e90-a3f2-7dc035950a9b-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.785838 4725 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/502a4051-5a60-4e90-a3f2-7dc035950a9b-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.785848 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbmfd\" (UniqueName: \"kubernetes.io/projected/502a4051-5a60-4e90-a3f2-7dc035950a9b-kube-api-access-qbmfd\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.886933 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-catalog-content\") pod \"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6\" (UID: \"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6\") " Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.887028 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10de7f77-2b14-4c56-b4db-ebb93422b89c-catalog-content\") pod \"10de7f77-2b14-4c56-b4db-ebb93422b89c\" (UID: \"10de7f77-2b14-4c56-b4db-ebb93422b89c\") " Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.887084 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10de7f77-2b14-4c56-b4db-ebb93422b89c-utilities\") pod \"10de7f77-2b14-4c56-b4db-ebb93422b89c\" (UID: \"10de7f77-2b14-4c56-b4db-ebb93422b89c\") " Jan 20 
11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.887134 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8ntp\" (UniqueName: \"kubernetes.io/projected/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-kube-api-access-k8ntp\") pod \"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6\" (UID: \"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6\") " Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.887180 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqcqh\" (UniqueName: \"kubernetes.io/projected/10de7f77-2b14-4c56-b4db-ebb93422b89c-kube-api-access-fqcqh\") pod \"10de7f77-2b14-4c56-b4db-ebb93422b89c\" (UID: \"10de7f77-2b14-4c56-b4db-ebb93422b89c\") " Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.887205 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-utilities\") pod \"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6\" (UID: \"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6\") " Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.888097 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-utilities" (OuterVolumeSpecName: "utilities") pod "7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" (UID: "7865a54a-be9b-4a0a-8c84-b45c8bfe40e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.888996 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10de7f77-2b14-4c56-b4db-ebb93422b89c-utilities" (OuterVolumeSpecName: "utilities") pod "10de7f77-2b14-4c56-b4db-ebb93422b89c" (UID: "10de7f77-2b14-4c56-b4db-ebb93422b89c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.903622 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10de7f77-2b14-4c56-b4db-ebb93422b89c-kube-api-access-fqcqh" (OuterVolumeSpecName: "kube-api-access-fqcqh") pod "10de7f77-2b14-4c56-b4db-ebb93422b89c" (UID: "10de7f77-2b14-4c56-b4db-ebb93422b89c"). InnerVolumeSpecName "kube-api-access-fqcqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.903674 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-kube-api-access-k8ntp" (OuterVolumeSpecName: "kube-api-access-k8ntp") pod "7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" (UID: "7865a54a-be9b-4a0a-8c84-b45c8bfe40e6"). InnerVolumeSpecName "kube-api-access-k8ntp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.917448 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10de7f77-2b14-4c56-b4db-ebb93422b89c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10de7f77-2b14-4c56-b4db-ebb93422b89c" (UID: "10de7f77-2b14-4c56-b4db-ebb93422b89c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.947496 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-htj9r"] Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.988429 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10de7f77-2b14-4c56-b4db-ebb93422b89c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.988825 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10de7f77-2b14-4c56-b4db-ebb93422b89c-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.988839 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8ntp\" (UniqueName: \"kubernetes.io/projected/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-kube-api-access-k8ntp\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.988863 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqcqh\" (UniqueName: \"kubernetes.io/projected/10de7f77-2b14-4c56-b4db-ebb93422b89c-kube-api-access-fqcqh\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.988876 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.026690 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" (UID: "7865a54a-be9b-4a0a-8c84-b45c8bfe40e6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.062640 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8pplm" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.076178 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6n4zh" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.095255 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.195744 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ba77d4b-0178-4730-8869-389efdf58851-utilities\") pod \"1ba77d4b-0178-4730-8869-389efdf58851\" (UID: \"1ba77d4b-0178-4730-8869-389efdf58851\") " Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.195804 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ebdb343-11c1-4e64-9538-98ca4298b821-catalog-content\") pod \"7ebdb343-11c1-4e64-9538-98ca4298b821\" (UID: \"7ebdb343-11c1-4e64-9538-98ca4298b821\") " Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.195919 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8h6b\" (UniqueName: \"kubernetes.io/projected/1ba77d4b-0178-4730-8869-389efdf58851-kube-api-access-m8h6b\") pod \"1ba77d4b-0178-4730-8869-389efdf58851\" (UID: \"1ba77d4b-0178-4730-8869-389efdf58851\") " Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.195942 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7ebdb343-11c1-4e64-9538-98ca4298b821-utilities\") pod \"7ebdb343-11c1-4e64-9538-98ca4298b821\" (UID: \"7ebdb343-11c1-4e64-9538-98ca4298b821\") " Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.195965 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ba77d4b-0178-4730-8869-389efdf58851-catalog-content\") pod \"1ba77d4b-0178-4730-8869-389efdf58851\" (UID: \"1ba77d4b-0178-4730-8869-389efdf58851\") " Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.196006 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkgp6\" (UniqueName: \"kubernetes.io/projected/7ebdb343-11c1-4e64-9538-98ca4298b821-kube-api-access-rkgp6\") pod \"7ebdb343-11c1-4e64-9538-98ca4298b821\" (UID: \"7ebdb343-11c1-4e64-9538-98ca4298b821\") " Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.196684 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ba77d4b-0178-4730-8869-389efdf58851-utilities" (OuterVolumeSpecName: "utilities") pod "1ba77d4b-0178-4730-8869-389efdf58851" (UID: "1ba77d4b-0178-4730-8869-389efdf58851"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.197457 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ebdb343-11c1-4e64-9538-98ca4298b821-utilities" (OuterVolumeSpecName: "utilities") pod "7ebdb343-11c1-4e64-9538-98ca4298b821" (UID: "7ebdb343-11c1-4e64-9538-98ca4298b821"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.201371 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ba77d4b-0178-4730-8869-389efdf58851-kube-api-access-m8h6b" (OuterVolumeSpecName: "kube-api-access-m8h6b") pod "1ba77d4b-0178-4730-8869-389efdf58851" (UID: "1ba77d4b-0178-4730-8869-389efdf58851"). InnerVolumeSpecName "kube-api-access-m8h6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.201436 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ebdb343-11c1-4e64-9538-98ca4298b821-kube-api-access-rkgp6" (OuterVolumeSpecName: "kube-api-access-rkgp6") pod "7ebdb343-11c1-4e64-9538-98ca4298b821" (UID: "7ebdb343-11c1-4e64-9538-98ca4298b821"). InnerVolumeSpecName "kube-api-access-rkgp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.257673 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ebdb343-11c1-4e64-9538-98ca4298b821-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ebdb343-11c1-4e64-9538-98ca4298b821" (UID: "7ebdb343-11c1-4e64-9538-98ca4298b821"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.266975 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ba77d4b-0178-4730-8869-389efdf58851-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ba77d4b-0178-4730-8869-389efdf58851" (UID: "1ba77d4b-0178-4730-8869-389efdf58851"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.297480 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8h6b\" (UniqueName: \"kubernetes.io/projected/1ba77d4b-0178-4730-8869-389efdf58851-kube-api-access-m8h6b\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.297549 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ebdb343-11c1-4e64-9538-98ca4298b821-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.297561 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ba77d4b-0178-4730-8869-389efdf58851-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.297575 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkgp6\" (UniqueName: \"kubernetes.io/projected/7ebdb343-11c1-4e64-9538-98ca4298b821-kube-api-access-rkgp6\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.297609 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ba77d4b-0178-4730-8869-389efdf58851-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.297618 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ebdb343-11c1-4e64-9538-98ca4298b821-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.484248 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8pplm" event={"ID":"1ba77d4b-0178-4730-8869-389efdf58851","Type":"ContainerDied","Data":"1a440377416e2e3be97cb4385521f0b527fd44fc3d296005eb3a6215b7798a51"} 
Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.484324 4725 scope.go:117] "RemoveContainer" containerID="fe1472cf74a505268aea085b9463dd11873df45aba71fba5af135317ac4193c1" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.484648 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8pplm" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.486526 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6nxjc" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.486509 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nxjc" event={"ID":"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6","Type":"ContainerDied","Data":"c444dae3d5ca85882553d57b5c52f2afebdc1ac865ea8fa27ac7b506e3700c60"} Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.488511 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" event={"ID":"502a4051-5a60-4e90-a3f2-7dc035950a9b","Type":"ContainerDied","Data":"0b78375c7ed8f9916a58dd59c26f3043217b694c6d335a958edaddd11c21782a"} Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.488572 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.493346 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-htj9r" event={"ID":"5666b0dd-5364-4bee-a091-26fa796770cf","Type":"ContainerStarted","Data":"f558ce2a7eb158e666290dd96abad2a7f4f18a12319b0a69da2c71c8c5fcd386"} Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.493379 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-htj9r" event={"ID":"5666b0dd-5364-4bee-a091-26fa796770cf","Type":"ContainerStarted","Data":"c1cf4501acbe7dd847f87b6a314b27e5232cecbe5d01451638e4494216fc8638"} Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.493596 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-htj9r" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.496149 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6n4zh" event={"ID":"7ebdb343-11c1-4e64-9538-98ca4298b821","Type":"ContainerDied","Data":"b3c438c94578ed127de08ab71e5b40caf95c66fe2d7a2b37a5e91dfd80db62be"} Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.496244 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6n4zh" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.501836 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-htj9r" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.502104 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2jtp" event={"ID":"10de7f77-2b14-4c56-b4db-ebb93422b89c","Type":"ContainerDied","Data":"fbfff8e8818beecfb8c02cfbcbeb21c81754f2aeda1e021b3b81559a276b8a66"} Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.502251 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2jtp" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.514232 4725 scope.go:117] "RemoveContainer" containerID="95b3efd0e36287cff3884a1d24955133183f96b36b4ed22b901a472384a7ccb9" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.530837 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-htj9r" podStartSLOduration=1.5308178369999998 podStartE2EDuration="1.530817837s" podCreationTimestamp="2026-01-20 11:11:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:11:40.527636537 +0000 UTC m=+428.735958510" watchObservedRunningTime="2026-01-20 11:11:40.530817837 +0000 UTC m=+428.739139810" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.556415 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6nxjc"] Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.557994 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6nxjc"] Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.575133 4725 scope.go:117] 
"RemoveContainer" containerID="38beb6d6731fbc36ccb21ece2faf5cceb4d8191e98451bfd04d8127368937300" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.577632 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tgvmj"] Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.581529 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tgvmj"] Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.591096 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2jtp"] Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.595672 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2jtp"] Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.597527 4725 scope.go:117] "RemoveContainer" containerID="7b79e336b8a9444cc9ab59a4ed8131e0eaf84cfc7a492f4b7e369343d81e1806" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.617205 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6n4zh"] Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.626056 4725 scope.go:117] "RemoveContainer" containerID="6fc29a792c0b2bdbde59b088ffd262a4ab5cd2ba7cd161055d4ccd07f8587ee9" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.926035 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6n4zh"] Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.947380 4725 scope.go:117] "RemoveContainer" containerID="9f5ff65ac43718d6c6a2cb0ff08d34aa44b3c5b853c8111fc5672b5c544f3567" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.961899 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" path="/var/lib/kubelet/pods/10de7f77-2b14-4c56-b4db-ebb93422b89c/volumes" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 
11:11:40.963504 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="502a4051-5a60-4e90-a3f2-7dc035950a9b" path="/var/lib/kubelet/pods/502a4051-5a60-4e90-a3f2-7dc035950a9b/volumes" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.966469 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" path="/var/lib/kubelet/pods/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6/volumes" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.970207 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ebdb343-11c1-4e64-9538-98ca4298b821" path="/var/lib/kubelet/pods/7ebdb343-11c1-4e64-9538-98ca4298b821/volumes" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.971753 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8pplm"] Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.980241 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8pplm"] Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.007905 4725 scope.go:117] "RemoveContainer" containerID="4e8f7c705143e7b6c5cb6feb639a537dc020d95d17ca8baee39e25fc4da83488" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.071688 4725 scope.go:117] "RemoveContainer" containerID="a353b91011ed8a0053f12f280e559ea334c135ab3db3548126610c4f6e3cdf19" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.088134 4725 scope.go:117] "RemoveContainer" containerID="e298ffa53486948221219263d81f91dd0aaf57b63b66a788f8e75324e688da37" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.114548 4725 scope.go:117] "RemoveContainer" containerID="a8988d59128eab2f53f7dd920de01a7b98a3e4e952f90431883ff756e50dadbe" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.134878 4725 scope.go:117] "RemoveContainer" containerID="a8afec3445fd422a4b7356f38493af3a76b9139996d9b5d98e4135127e6f59ee" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 
11:11:41.157700 4725 scope.go:117] "RemoveContainer" containerID="3aebd70372873b9fbd7b4e02c72fa5025a0936f55bfdb8b39fafb1a0022fe117" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.170858 4725 scope.go:117] "RemoveContainer" containerID="79b3dc2509427f8e48ea65515f6bd240f048253490613646e6daeff65ff41302" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.484582 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hht7w"] Jan 20 11:11:41 crc kubenswrapper[4725]: E0120 11:11:41.485165 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ba77d4b-0178-4730-8869-389efdf58851" containerName="registry-server" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.485269 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ba77d4b-0178-4730-8869-389efdf58851" containerName="registry-server" Jan 20 11:11:41 crc kubenswrapper[4725]: E0120 11:11:41.485392 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" containerName="registry-server" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.485458 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" containerName="registry-server" Jan 20 11:11:41 crc kubenswrapper[4725]: E0120 11:11:41.485531 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" containerName="extract-utilities" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.485599 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" containerName="extract-utilities" Jan 20 11:11:41 crc kubenswrapper[4725]: E0120 11:11:41.485672 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ebdb343-11c1-4e64-9538-98ca4298b821" containerName="extract-content" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.485735 4725 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7ebdb343-11c1-4e64-9538-98ca4298b821" containerName="extract-content" Jan 20 11:11:41 crc kubenswrapper[4725]: E0120 11:11:41.485799 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="502a4051-5a60-4e90-a3f2-7dc035950a9b" containerName="marketplace-operator" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.485870 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="502a4051-5a60-4e90-a3f2-7dc035950a9b" containerName="marketplace-operator" Jan 20 11:11:41 crc kubenswrapper[4725]: E0120 11:11:41.485931 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" containerName="registry-server" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.485988 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" containerName="registry-server" Jan 20 11:11:41 crc kubenswrapper[4725]: E0120 11:11:41.486044 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ba77d4b-0178-4730-8869-389efdf58851" containerName="extract-content" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.486128 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ba77d4b-0178-4730-8869-389efdf58851" containerName="extract-content" Jan 20 11:11:41 crc kubenswrapper[4725]: E0120 11:11:41.486204 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ba77d4b-0178-4730-8869-389efdf58851" containerName="extract-utilities" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.486267 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ba77d4b-0178-4730-8869-389efdf58851" containerName="extract-utilities" Jan 20 11:11:41 crc kubenswrapper[4725]: E0120 11:11:41.486324 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ebdb343-11c1-4e64-9538-98ca4298b821" containerName="registry-server" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.486377 4725 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7ebdb343-11c1-4e64-9538-98ca4298b821" containerName="registry-server" Jan 20 11:11:41 crc kubenswrapper[4725]: E0120 11:11:41.486440 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ebdb343-11c1-4e64-9538-98ca4298b821" containerName="extract-utilities" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.486496 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ebdb343-11c1-4e64-9538-98ca4298b821" containerName="extract-utilities" Jan 20 11:11:41 crc kubenswrapper[4725]: E0120 11:11:41.486556 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" containerName="extract-content" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.486610 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" containerName="extract-content" Jan 20 11:11:41 crc kubenswrapper[4725]: E0120 11:11:41.486661 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" containerName="extract-content" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.486730 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" containerName="extract-content" Jan 20 11:11:41 crc kubenswrapper[4725]: E0120 11:11:41.486798 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" containerName="extract-utilities" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.486871 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" containerName="extract-utilities" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.487016 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" containerName="registry-server" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.487112 4725 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" containerName="registry-server" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.487204 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="502a4051-5a60-4e90-a3f2-7dc035950a9b" containerName="marketplace-operator" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.487294 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ba77d4b-0178-4730-8869-389efdf58851" containerName="registry-server" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.487375 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ebdb343-11c1-4e64-9538-98ca4298b821" containerName="registry-server" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.488910 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hht7w" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.492493 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.495484 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hht7w"] Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.632321 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c4020a9-4953-4dee-8bc0-2329493c8b8a-catalog-content\") pod \"redhat-operators-hht7w\" (UID: \"2c4020a9-4953-4dee-8bc0-2329493c8b8a\") " pod="openshift-marketplace/redhat-operators-hht7w" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.632366 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hmzs\" (UniqueName: \"kubernetes.io/projected/2c4020a9-4953-4dee-8bc0-2329493c8b8a-kube-api-access-7hmzs\") pod \"redhat-operators-hht7w\" (UID: 
\"2c4020a9-4953-4dee-8bc0-2329493c8b8a\") " pod="openshift-marketplace/redhat-operators-hht7w" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.632404 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c4020a9-4953-4dee-8bc0-2329493c8b8a-utilities\") pod \"redhat-operators-hht7w\" (UID: \"2c4020a9-4953-4dee-8bc0-2329493c8b8a\") " pod="openshift-marketplace/redhat-operators-hht7w" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.733301 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c4020a9-4953-4dee-8bc0-2329493c8b8a-catalog-content\") pod \"redhat-operators-hht7w\" (UID: \"2c4020a9-4953-4dee-8bc0-2329493c8b8a\") " pod="openshift-marketplace/redhat-operators-hht7w" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.733360 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hmzs\" (UniqueName: \"kubernetes.io/projected/2c4020a9-4953-4dee-8bc0-2329493c8b8a-kube-api-access-7hmzs\") pod \"redhat-operators-hht7w\" (UID: \"2c4020a9-4953-4dee-8bc0-2329493c8b8a\") " pod="openshift-marketplace/redhat-operators-hht7w" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.733404 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c4020a9-4953-4dee-8bc0-2329493c8b8a-utilities\") pod \"redhat-operators-hht7w\" (UID: \"2c4020a9-4953-4dee-8bc0-2329493c8b8a\") " pod="openshift-marketplace/redhat-operators-hht7w" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.733828 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c4020a9-4953-4dee-8bc0-2329493c8b8a-utilities\") pod \"redhat-operators-hht7w\" (UID: \"2c4020a9-4953-4dee-8bc0-2329493c8b8a\") " 
pod="openshift-marketplace/redhat-operators-hht7w" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.734405 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c4020a9-4953-4dee-8bc0-2329493c8b8a-catalog-content\") pod \"redhat-operators-hht7w\" (UID: \"2c4020a9-4953-4dee-8bc0-2329493c8b8a\") " pod="openshift-marketplace/redhat-operators-hht7w" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.759006 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hmzs\" (UniqueName: \"kubernetes.io/projected/2c4020a9-4953-4dee-8bc0-2329493c8b8a-kube-api-access-7hmzs\") pod \"redhat-operators-hht7w\" (UID: \"2c4020a9-4953-4dee-8bc0-2329493c8b8a\") " pod="openshift-marketplace/redhat-operators-hht7w" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.809656 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hht7w" Jan 20 11:11:42 crc kubenswrapper[4725]: I0120 11:11:42.271806 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hht7w"] Jan 20 11:11:42 crc kubenswrapper[4725]: I0120 11:11:42.485443 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" podUID="cec62c65-a846-4cc0-bb51-01d2d70c4c85" containerName="registry" containerID="cri-o://8a8b25d03f991998693369336d59387900063f6ca5d2b3f0359c5e5416c3ec8b" gracePeriod=30 Jan 20 11:11:42 crc kubenswrapper[4725]: I0120 11:11:42.523058 4725 generic.go:334] "Generic (PLEG): container finished" podID="2c4020a9-4953-4dee-8bc0-2329493c8b8a" containerID="e8cb2acadf289125fec98b352d3572f1856b247139e042f8f95bfeab691ed4fa" exitCode=0 Jan 20 11:11:42 crc kubenswrapper[4725]: I0120 11:11:42.523210 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hht7w" 
event={"ID":"2c4020a9-4953-4dee-8bc0-2329493c8b8a","Type":"ContainerDied","Data":"e8cb2acadf289125fec98b352d3572f1856b247139e042f8f95bfeab691ed4fa"} Jan 20 11:11:42 crc kubenswrapper[4725]: I0120 11:11:42.523453 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hht7w" event={"ID":"2c4020a9-4953-4dee-8bc0-2329493c8b8a","Type":"ContainerStarted","Data":"b3e93984857ebda76d0640c08dbdcc80927d9e3c76e1309aec29f9914b93ba34"} Jan 20 11:11:42 crc kubenswrapper[4725]: I0120 11:11:42.898814 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6dzml"] Jan 20 11:11:42 crc kubenswrapper[4725]: I0120 11:11:42.899933 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6dzml" Jan 20 11:11:42 crc kubenswrapper[4725]: I0120 11:11:42.902794 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 20 11:11:42 crc kubenswrapper[4725]: I0120 11:11:42.916898 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:11:42 crc kubenswrapper[4725]: I0120 11:11:42.920884 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6dzml"] Jan 20 11:11:42 crc kubenswrapper[4725]: I0120 11:11:42.941151 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ba77d4b-0178-4730-8869-389efdf58851" path="/var/lib/kubelet/pods/1ba77d4b-0178-4730-8869-389efdf58851/volumes" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.052483 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-bound-sa-token\") pod \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.052709 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.052769 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-registry-tls\") pod \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.052812 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cec62c65-a846-4cc0-bb51-01d2d70c4c85-installation-pull-secrets\") pod \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " Jan 20 11:11:43 crc 
kubenswrapper[4725]: I0120 11:11:43.052860 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cec62c65-a846-4cc0-bb51-01d2d70c4c85-registry-certificates\") pod \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.052888 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cec62c65-a846-4cc0-bb51-01d2d70c4c85-trusted-ca\") pod \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.052925 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cec62c65-a846-4cc0-bb51-01d2d70c4c85-ca-trust-extracted\") pod \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.052966 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nmbb\" (UniqueName: \"kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-kube-api-access-5nmbb\") pod \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.053134 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1530fd1-1850-4d4f-b6a7-cc1784d9c399-utilities\") pod \"certified-operators-6dzml\" (UID: \"e1530fd1-1850-4d4f-b6a7-cc1784d9c399\") " pod="openshift-marketplace/certified-operators-6dzml" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.053186 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1530fd1-1850-4d4f-b6a7-cc1784d9c399-catalog-content\") pod \"certified-operators-6dzml\" (UID: \"e1530fd1-1850-4d4f-b6a7-cc1784d9c399\") " pod="openshift-marketplace/certified-operators-6dzml" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.053229 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blwm2\" (UniqueName: \"kubernetes.io/projected/e1530fd1-1850-4d4f-b6a7-cc1784d9c399-kube-api-access-blwm2\") pod \"certified-operators-6dzml\" (UID: \"e1530fd1-1850-4d4f-b6a7-cc1784d9c399\") " pod="openshift-marketplace/certified-operators-6dzml" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.054552 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cec62c65-a846-4cc0-bb51-01d2d70c4c85-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "cec62c65-a846-4cc0-bb51-01d2d70c4c85" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.054846 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cec62c65-a846-4cc0-bb51-01d2d70c4c85-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "cec62c65-a846-4cc0-bb51-01d2d70c4c85" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.060013 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "cec62c65-a846-4cc0-bb51-01d2d70c4c85" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.060048 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cec62c65-a846-4cc0-bb51-01d2d70c4c85-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "cec62c65-a846-4cc0-bb51-01d2d70c4c85" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.060168 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-kube-api-access-5nmbb" (OuterVolumeSpecName: "kube-api-access-5nmbb") pod "cec62c65-a846-4cc0-bb51-01d2d70c4c85" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85"). InnerVolumeSpecName "kube-api-access-5nmbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.060355 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "cec62c65-a846-4cc0-bb51-01d2d70c4c85" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.063612 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "cec62c65-a846-4cc0-bb51-01d2d70c4c85" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.073218 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cec62c65-a846-4cc0-bb51-01d2d70c4c85-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "cec62c65-a846-4cc0-bb51-01d2d70c4c85" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.154825 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1530fd1-1850-4d4f-b6a7-cc1784d9c399-utilities\") pod \"certified-operators-6dzml\" (UID: \"e1530fd1-1850-4d4f-b6a7-cc1784d9c399\") " pod="openshift-marketplace/certified-operators-6dzml" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.154896 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1530fd1-1850-4d4f-b6a7-cc1784d9c399-catalog-content\") pod \"certified-operators-6dzml\" (UID: \"e1530fd1-1850-4d4f-b6a7-cc1784d9c399\") " pod="openshift-marketplace/certified-operators-6dzml" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.154934 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blwm2\" (UniqueName: \"kubernetes.io/projected/e1530fd1-1850-4d4f-b6a7-cc1784d9c399-kube-api-access-blwm2\") pod \"certified-operators-6dzml\" (UID: \"e1530fd1-1850-4d4f-b6a7-cc1784d9c399\") " pod="openshift-marketplace/certified-operators-6dzml" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.155029 4725 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cec62c65-a846-4cc0-bb51-01d2d70c4c85-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:43 
crc kubenswrapper[4725]: I0120 11:11:43.155045 4725 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cec62c65-a846-4cc0-bb51-01d2d70c4c85-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.155057 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cec62c65-a846-4cc0-bb51-01d2d70c4c85-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.155070 4725 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cec62c65-a846-4cc0-bb51-01d2d70c4c85-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.155194 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nmbb\" (UniqueName: \"kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-kube-api-access-5nmbb\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.155205 4725 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.155213 4725 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.155403 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1530fd1-1850-4d4f-b6a7-cc1784d9c399-catalog-content\") pod \"certified-operators-6dzml\" (UID: \"e1530fd1-1850-4d4f-b6a7-cc1784d9c399\") " 
pod="openshift-marketplace/certified-operators-6dzml" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.155523 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1530fd1-1850-4d4f-b6a7-cc1784d9c399-utilities\") pod \"certified-operators-6dzml\" (UID: \"e1530fd1-1850-4d4f-b6a7-cc1784d9c399\") " pod="openshift-marketplace/certified-operators-6dzml" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.171956 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blwm2\" (UniqueName: \"kubernetes.io/projected/e1530fd1-1850-4d4f-b6a7-cc1784d9c399-kube-api-access-blwm2\") pod \"certified-operators-6dzml\" (UID: \"e1530fd1-1850-4d4f-b6a7-cc1784d9c399\") " pod="openshift-marketplace/certified-operators-6dzml" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.230152 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6dzml" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.530968 4725 generic.go:334] "Generic (PLEG): container finished" podID="cec62c65-a846-4cc0-bb51-01d2d70c4c85" containerID="8a8b25d03f991998693369336d59387900063f6ca5d2b3f0359c5e5416c3ec8b" exitCode=0 Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.531144 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.531163 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" event={"ID":"cec62c65-a846-4cc0-bb51-01d2d70c4c85","Type":"ContainerDied","Data":"8a8b25d03f991998693369336d59387900063f6ca5d2b3f0359c5e5416c3ec8b"} Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.532031 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" event={"ID":"cec62c65-a846-4cc0-bb51-01d2d70c4c85","Type":"ContainerDied","Data":"ed7560860908ee6c4f83f3490cbdd1843d5adf7ac8051897ed017552b83ca2ee"} Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.532059 4725 scope.go:117] "RemoveContainer" containerID="8a8b25d03f991998693369336d59387900063f6ca5d2b3f0359c5e5416c3ec8b" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.567366 4725 scope.go:117] "RemoveContainer" containerID="8a8b25d03f991998693369336d59387900063f6ca5d2b3f0359c5e5416c3ec8b" Jan 20 11:11:43 crc kubenswrapper[4725]: E0120 11:11:43.569401 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a8b25d03f991998693369336d59387900063f6ca5d2b3f0359c5e5416c3ec8b\": container with ID starting with 8a8b25d03f991998693369336d59387900063f6ca5d2b3f0359c5e5416c3ec8b not found: ID does not exist" containerID="8a8b25d03f991998693369336d59387900063f6ca5d2b3f0359c5e5416c3ec8b" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.569403 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w5jhq"] Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.569453 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a8b25d03f991998693369336d59387900063f6ca5d2b3f0359c5e5416c3ec8b"} err="failed to 
get container status \"8a8b25d03f991998693369336d59387900063f6ca5d2b3f0359c5e5416c3ec8b\": rpc error: code = NotFound desc = could not find container \"8a8b25d03f991998693369336d59387900063f6ca5d2b3f0359c5e5416c3ec8b\": container with ID starting with 8a8b25d03f991998693369336d59387900063f6ca5d2b3f0359c5e5416c3ec8b not found: ID does not exist" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.575810 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w5jhq"] Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.629153 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6dzml"] Jan 20 11:11:43 crc kubenswrapper[4725]: W0120 11:11:43.632759 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1530fd1_1850_4d4f_b6a7_cc1784d9c399.slice/crio-5b6e07754a1cd4fb4b59a5611f8e1c88ad608e92ed527529c9aa3aaa60c418bf WatchSource:0}: Error finding container 5b6e07754a1cd4fb4b59a5611f8e1c88ad608e92ed527529c9aa3aaa60c418bf: Status 404 returned error can't find the container with id 5b6e07754a1cd4fb4b59a5611f8e1c88ad608e92ed527529c9aa3aaa60c418bf Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.882779 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hm4k5"] Jan 20 11:11:43 crc kubenswrapper[4725]: E0120 11:11:43.883017 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cec62c65-a846-4cc0-bb51-01d2d70c4c85" containerName="registry" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.883035 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="cec62c65-a846-4cc0-bb51-01d2d70c4c85" containerName="registry" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.883176 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="cec62c65-a846-4cc0-bb51-01d2d70c4c85" containerName="registry" Jan 20 
11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.883921 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hm4k5" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.887920 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.892665 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hm4k5"] Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.992188 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da38c2a2-fb87-4115-ac25-0256bee850ae-utilities\") pod \"community-operators-hm4k5\" (UID: \"da38c2a2-fb87-4115-ac25-0256bee850ae\") " pod="openshift-marketplace/community-operators-hm4k5" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.992260 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da38c2a2-fb87-4115-ac25-0256bee850ae-catalog-content\") pod \"community-operators-hm4k5\" (UID: \"da38c2a2-fb87-4115-ac25-0256bee850ae\") " pod="openshift-marketplace/community-operators-hm4k5" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.992335 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlkb5\" (UniqueName: \"kubernetes.io/projected/da38c2a2-fb87-4115-ac25-0256bee850ae-kube-api-access-qlkb5\") pod \"community-operators-hm4k5\" (UID: \"da38c2a2-fb87-4115-ac25-0256bee850ae\") " pod="openshift-marketplace/community-operators-hm4k5" Jan 20 11:11:44 crc kubenswrapper[4725]: I0120 11:11:44.093689 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/da38c2a2-fb87-4115-ac25-0256bee850ae-catalog-content\") pod \"community-operators-hm4k5\" (UID: \"da38c2a2-fb87-4115-ac25-0256bee850ae\") " pod="openshift-marketplace/community-operators-hm4k5" Jan 20 11:11:44 crc kubenswrapper[4725]: I0120 11:11:44.093787 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlkb5\" (UniqueName: \"kubernetes.io/projected/da38c2a2-fb87-4115-ac25-0256bee850ae-kube-api-access-qlkb5\") pod \"community-operators-hm4k5\" (UID: \"da38c2a2-fb87-4115-ac25-0256bee850ae\") " pod="openshift-marketplace/community-operators-hm4k5" Jan 20 11:11:44 crc kubenswrapper[4725]: I0120 11:11:44.093860 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da38c2a2-fb87-4115-ac25-0256bee850ae-utilities\") pod \"community-operators-hm4k5\" (UID: \"da38c2a2-fb87-4115-ac25-0256bee850ae\") " pod="openshift-marketplace/community-operators-hm4k5" Jan 20 11:11:44 crc kubenswrapper[4725]: I0120 11:11:44.094393 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da38c2a2-fb87-4115-ac25-0256bee850ae-catalog-content\") pod \"community-operators-hm4k5\" (UID: \"da38c2a2-fb87-4115-ac25-0256bee850ae\") " pod="openshift-marketplace/community-operators-hm4k5" Jan 20 11:11:44 crc kubenswrapper[4725]: I0120 11:11:44.094414 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da38c2a2-fb87-4115-ac25-0256bee850ae-utilities\") pod \"community-operators-hm4k5\" (UID: \"da38c2a2-fb87-4115-ac25-0256bee850ae\") " pod="openshift-marketplace/community-operators-hm4k5" Jan 20 11:11:44 crc kubenswrapper[4725]: I0120 11:11:44.113546 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlkb5\" (UniqueName: 
\"kubernetes.io/projected/da38c2a2-fb87-4115-ac25-0256bee850ae-kube-api-access-qlkb5\") pod \"community-operators-hm4k5\" (UID: \"da38c2a2-fb87-4115-ac25-0256bee850ae\") " pod="openshift-marketplace/community-operators-hm4k5" Jan 20 11:11:44 crc kubenswrapper[4725]: I0120 11:11:44.203026 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hm4k5" Jan 20 11:11:44 crc kubenswrapper[4725]: I0120 11:11:44.542130 4725 generic.go:334] "Generic (PLEG): container finished" podID="e1530fd1-1850-4d4f-b6a7-cc1784d9c399" containerID="f764718ca9a5b6ac659a1d7302281a4f92ac07e802e076380a4f9c3dc2f6a39a" exitCode=0 Jan 20 11:11:44 crc kubenswrapper[4725]: I0120 11:11:44.542512 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6dzml" event={"ID":"e1530fd1-1850-4d4f-b6a7-cc1784d9c399","Type":"ContainerDied","Data":"f764718ca9a5b6ac659a1d7302281a4f92ac07e802e076380a4f9c3dc2f6a39a"} Jan 20 11:11:44 crc kubenswrapper[4725]: I0120 11:11:44.542573 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6dzml" event={"ID":"e1530fd1-1850-4d4f-b6a7-cc1784d9c399","Type":"ContainerStarted","Data":"5b6e07754a1cd4fb4b59a5611f8e1c88ad608e92ed527529c9aa3aaa60c418bf"} Jan 20 11:11:44 crc kubenswrapper[4725]: I0120 11:11:44.552584 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hht7w" event={"ID":"2c4020a9-4953-4dee-8bc0-2329493c8b8a","Type":"ContainerStarted","Data":"f941a6e2c8f5d7761cdfac57414cafaea4f486589d48240bcfa7b604979a0a9d"} Jan 20 11:11:44 crc kubenswrapper[4725]: I0120 11:11:44.796261 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hm4k5"] Jan 20 11:11:44 crc kubenswrapper[4725]: W0120 11:11:44.841698 4725 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda38c2a2_fb87_4115_ac25_0256bee850ae.slice/crio-b3bf2b8aa63fbfabc07ce204356eb517e91c3ddd522fe7e914e908a97b7c1ec6 WatchSource:0}: Error finding container b3bf2b8aa63fbfabc07ce204356eb517e91c3ddd522fe7e914e908a97b7c1ec6: Status 404 returned error can't find the container with id b3bf2b8aa63fbfabc07ce204356eb517e91c3ddd522fe7e914e908a97b7c1ec6 Jan 20 11:11:44 crc kubenswrapper[4725]: I0120 11:11:44.941702 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cec62c65-a846-4cc0-bb51-01d2d70c4c85" path="/var/lib/kubelet/pods/cec62c65-a846-4cc0-bb51-01d2d70c4c85/volumes" Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.281995 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hz6gm"] Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.282929 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.284991 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.293012 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hz6gm"] Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.437456 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5afef1-c036-41b7-a884-72ee03a01ea9-catalog-content\") pod \"redhat-marketplace-hz6gm\" (UID: \"5f5afef1-c036-41b7-a884-72ee03a01ea9\") " pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.437553 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5f5afef1-c036-41b7-a884-72ee03a01ea9-utilities\") pod \"redhat-marketplace-hz6gm\" (UID: \"5f5afef1-c036-41b7-a884-72ee03a01ea9\") " pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.437630 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wls2\" (UniqueName: \"kubernetes.io/projected/5f5afef1-c036-41b7-a884-72ee03a01ea9-kube-api-access-7wls2\") pod \"redhat-marketplace-hz6gm\" (UID: \"5f5afef1-c036-41b7-a884-72ee03a01ea9\") " pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.539525 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f5afef1-c036-41b7-a884-72ee03a01ea9-utilities\") pod \"redhat-marketplace-hz6gm\" (UID: \"5f5afef1-c036-41b7-a884-72ee03a01ea9\") " pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.539598 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wls2\" (UniqueName: \"kubernetes.io/projected/5f5afef1-c036-41b7-a884-72ee03a01ea9-kube-api-access-7wls2\") pod \"redhat-marketplace-hz6gm\" (UID: \"5f5afef1-c036-41b7-a884-72ee03a01ea9\") " pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.539667 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5afef1-c036-41b7-a884-72ee03a01ea9-catalog-content\") pod \"redhat-marketplace-hz6gm\" (UID: \"5f5afef1-c036-41b7-a884-72ee03a01ea9\") " pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.541012 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5f5afef1-c036-41b7-a884-72ee03a01ea9-utilities\") pod \"redhat-marketplace-hz6gm\" (UID: \"5f5afef1-c036-41b7-a884-72ee03a01ea9\") " pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.541127 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5afef1-c036-41b7-a884-72ee03a01ea9-catalog-content\") pod \"redhat-marketplace-hz6gm\" (UID: \"5f5afef1-c036-41b7-a884-72ee03a01ea9\") " pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.563312 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wls2\" (UniqueName: \"kubernetes.io/projected/5f5afef1-c036-41b7-a884-72ee03a01ea9-kube-api-access-7wls2\") pod \"redhat-marketplace-hz6gm\" (UID: \"5f5afef1-c036-41b7-a884-72ee03a01ea9\") " pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.577917 4725 generic.go:334] "Generic (PLEG): container finished" podID="2c4020a9-4953-4dee-8bc0-2329493c8b8a" containerID="f941a6e2c8f5d7761cdfac57414cafaea4f486589d48240bcfa7b604979a0a9d" exitCode=0 Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.578016 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hht7w" event={"ID":"2c4020a9-4953-4dee-8bc0-2329493c8b8a","Type":"ContainerDied","Data":"f941a6e2c8f5d7761cdfac57414cafaea4f486589d48240bcfa7b604979a0a9d"} Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.585230 4725 generic.go:334] "Generic (PLEG): container finished" podID="da38c2a2-fb87-4115-ac25-0256bee850ae" containerID="f62d075a93b6fe9e16a57eaedd21e95b4746f4b271035e9245ac949b7f419b8c" exitCode=0 Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.585326 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm4k5" 
event={"ID":"da38c2a2-fb87-4115-ac25-0256bee850ae","Type":"ContainerDied","Data":"f62d075a93b6fe9e16a57eaedd21e95b4746f4b271035e9245ac949b7f419b8c"} Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.585366 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm4k5" event={"ID":"da38c2a2-fb87-4115-ac25-0256bee850ae","Type":"ContainerStarted","Data":"b3bf2b8aa63fbfabc07ce204356eb517e91c3ddd522fe7e914e908a97b7c1ec6"} Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.610261 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:11:46 crc kubenswrapper[4725]: I0120 11:11:46.051633 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hz6gm"] Jan 20 11:11:46 crc kubenswrapper[4725]: I0120 11:11:46.596465 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hht7w" event={"ID":"2c4020a9-4953-4dee-8bc0-2329493c8b8a","Type":"ContainerStarted","Data":"3dd1b4dc4fa2bdc681f5c471e9f8f3bd74508115900d1dbbf1e0bc9f0487534a"} Jan 20 11:11:46 crc kubenswrapper[4725]: I0120 11:11:46.598024 4725 generic.go:334] "Generic (PLEG): container finished" podID="5f5afef1-c036-41b7-a884-72ee03a01ea9" containerID="510ca6f6e8912929553b8efdb405702730623d4998b299dd17775d18e33cd0c0" exitCode=0 Jan 20 11:11:46 crc kubenswrapper[4725]: I0120 11:11:46.598182 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hz6gm" event={"ID":"5f5afef1-c036-41b7-a884-72ee03a01ea9","Type":"ContainerDied","Data":"510ca6f6e8912929553b8efdb405702730623d4998b299dd17775d18e33cd0c0"} Jan 20 11:11:46 crc kubenswrapper[4725]: I0120 11:11:46.598359 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hz6gm" 
event={"ID":"5f5afef1-c036-41b7-a884-72ee03a01ea9","Type":"ContainerStarted","Data":"620e1c951a5a2604e4ce57c3358b1935e7a5f6d46eec1265f136ddf73f1fb079"} Jan 20 11:11:46 crc kubenswrapper[4725]: I0120 11:11:46.600110 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm4k5" event={"ID":"da38c2a2-fb87-4115-ac25-0256bee850ae","Type":"ContainerStarted","Data":"b108a48d976e53e2951586fecb05498b34546b6ee68450d00491a99c445ae608"} Jan 20 11:11:46 crc kubenswrapper[4725]: I0120 11:11:46.602171 4725 generic.go:334] "Generic (PLEG): container finished" podID="e1530fd1-1850-4d4f-b6a7-cc1784d9c399" containerID="683185dc2054d27f858b64fc845b60cf512ece9e9ae65f544542bd1a27883a18" exitCode=0 Jan 20 11:11:46 crc kubenswrapper[4725]: I0120 11:11:46.602203 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6dzml" event={"ID":"e1530fd1-1850-4d4f-b6a7-cc1784d9c399","Type":"ContainerDied","Data":"683185dc2054d27f858b64fc845b60cf512ece9e9ae65f544542bd1a27883a18"} Jan 20 11:11:46 crc kubenswrapper[4725]: I0120 11:11:46.625398 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hht7w" podStartSLOduration=2.128617826 podStartE2EDuration="5.625381835s" podCreationTimestamp="2026-01-20 11:11:41 +0000 UTC" firstStartedPulling="2026-01-20 11:11:42.525437921 +0000 UTC m=+430.733759884" lastFinishedPulling="2026-01-20 11:11:46.02220191 +0000 UTC m=+434.230523893" observedRunningTime="2026-01-20 11:11:46.621882057 +0000 UTC m=+434.830204050" watchObservedRunningTime="2026-01-20 11:11:46.625381835 +0000 UTC m=+434.833703808" Jan 20 11:11:47 crc kubenswrapper[4725]: I0120 11:11:47.621272 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6dzml" 
event={"ID":"e1530fd1-1850-4d4f-b6a7-cc1784d9c399","Type":"ContainerStarted","Data":"caf58d02456c9e340d612bb66dd695db47c6c3ef907e95bbdc47015fdaaac498"} Jan 20 11:11:47 crc kubenswrapper[4725]: I0120 11:11:47.633689 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm4k5" event={"ID":"da38c2a2-fb87-4115-ac25-0256bee850ae","Type":"ContainerDied","Data":"b108a48d976e53e2951586fecb05498b34546b6ee68450d00491a99c445ae608"} Jan 20 11:11:47 crc kubenswrapper[4725]: I0120 11:11:47.635277 4725 generic.go:334] "Generic (PLEG): container finished" podID="da38c2a2-fb87-4115-ac25-0256bee850ae" containerID="b108a48d976e53e2951586fecb05498b34546b6ee68450d00491a99c445ae608" exitCode=0 Jan 20 11:11:47 crc kubenswrapper[4725]: I0120 11:11:47.652880 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6dzml" podStartSLOduration=3.117943335 podStartE2EDuration="5.652860997s" podCreationTimestamp="2026-01-20 11:11:42 +0000 UTC" firstStartedPulling="2026-01-20 11:11:44.545000986 +0000 UTC m=+432.753322969" lastFinishedPulling="2026-01-20 11:11:47.079918658 +0000 UTC m=+435.288240631" observedRunningTime="2026-01-20 11:11:47.648802311 +0000 UTC m=+435.857124314" watchObservedRunningTime="2026-01-20 11:11:47.652860997 +0000 UTC m=+435.861182970" Jan 20 11:11:48 crc kubenswrapper[4725]: I0120 11:11:48.643063 4725 generic.go:334] "Generic (PLEG): container finished" podID="5f5afef1-c036-41b7-a884-72ee03a01ea9" containerID="fc5e5d5b4ed3a4dc0238a259d481b0857d19f493ef0334cdfe81629cccac0d98" exitCode=0 Jan 20 11:11:48 crc kubenswrapper[4725]: I0120 11:11:48.643138 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hz6gm" event={"ID":"5f5afef1-c036-41b7-a884-72ee03a01ea9","Type":"ContainerDied","Data":"fc5e5d5b4ed3a4dc0238a259d481b0857d19f493ef0334cdfe81629cccac0d98"} Jan 20 11:11:48 crc kubenswrapper[4725]: I0120 11:11:48.646488 
4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm4k5" event={"ID":"da38c2a2-fb87-4115-ac25-0256bee850ae","Type":"ContainerStarted","Data":"5a039b5561beca49ad265852ed78bc72b62a83dbc64c3518fa94ead2a122c7d7"} Jan 20 11:11:48 crc kubenswrapper[4725]: I0120 11:11:48.696810 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hm4k5" podStartSLOduration=3.231847082 podStartE2EDuration="5.696789934s" podCreationTimestamp="2026-01-20 11:11:43 +0000 UTC" firstStartedPulling="2026-01-20 11:11:45.586587169 +0000 UTC m=+433.794909142" lastFinishedPulling="2026-01-20 11:11:48.051530011 +0000 UTC m=+436.259851994" observedRunningTime="2026-01-20 11:11:48.693342017 +0000 UTC m=+436.901663990" watchObservedRunningTime="2026-01-20 11:11:48.696789934 +0000 UTC m=+436.905111907" Jan 20 11:11:50 crc kubenswrapper[4725]: I0120 11:11:50.661279 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hz6gm" event={"ID":"5f5afef1-c036-41b7-a884-72ee03a01ea9","Type":"ContainerStarted","Data":"77a87817f3fe69dd7feb600c256c2aa093a8c92e4e04681a765d0272f487d0d9"} Jan 20 11:11:50 crc kubenswrapper[4725]: I0120 11:11:50.687154 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hz6gm" podStartSLOduration=2.750471904 podStartE2EDuration="5.687138565s" podCreationTimestamp="2026-01-20 11:11:45 +0000 UTC" firstStartedPulling="2026-01-20 11:11:46.599261398 +0000 UTC m=+434.807583371" lastFinishedPulling="2026-01-20 11:11:49.535928059 +0000 UTC m=+437.744250032" observedRunningTime="2026-01-20 11:11:50.682646244 +0000 UTC m=+438.890968207" watchObservedRunningTime="2026-01-20 11:11:50.687138565 +0000 UTC m=+438.895460528" Jan 20 11:11:51 crc kubenswrapper[4725]: I0120 11:11:51.812190 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-hht7w" Jan 20 11:11:51 crc kubenswrapper[4725]: I0120 11:11:51.812266 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hht7w" Jan 20 11:11:52 crc kubenswrapper[4725]: I0120 11:11:52.862168 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hht7w" podUID="2c4020a9-4953-4dee-8bc0-2329493c8b8a" containerName="registry-server" probeResult="failure" output=< Jan 20 11:11:52 crc kubenswrapper[4725]: timeout: failed to connect service ":50051" within 1s Jan 20 11:11:52 crc kubenswrapper[4725]: > Jan 20 11:11:53 crc kubenswrapper[4725]: I0120 11:11:53.231426 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6dzml" Jan 20 11:11:53 crc kubenswrapper[4725]: I0120 11:11:53.231484 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6dzml" Jan 20 11:11:53 crc kubenswrapper[4725]: I0120 11:11:53.276169 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6dzml" Jan 20 11:11:53 crc kubenswrapper[4725]: I0120 11:11:53.738312 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6dzml" Jan 20 11:11:54 crc kubenswrapper[4725]: I0120 11:11:54.204177 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hm4k5" Jan 20 11:11:54 crc kubenswrapper[4725]: I0120 11:11:54.205943 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hm4k5" Jan 20 11:11:54 crc kubenswrapper[4725]: I0120 11:11:54.249100 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hm4k5" Jan 20 11:11:54 
crc kubenswrapper[4725]: I0120 11:11:54.725728 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hm4k5" Jan 20 11:11:55 crc kubenswrapper[4725]: I0120 11:11:55.610653 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:11:55 crc kubenswrapper[4725]: I0120 11:11:55.610794 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:11:55 crc kubenswrapper[4725]: I0120 11:11:55.659994 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:11:55 crc kubenswrapper[4725]: I0120 11:11:55.734576 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:12:01 crc kubenswrapper[4725]: I0120 11:12:01.849849 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hht7w" Jan 20 11:12:01 crc kubenswrapper[4725]: I0120 11:12:01.894159 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hht7w" Jan 20 11:13:56 crc kubenswrapper[4725]: I0120 11:13:56.728013 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:13:56 crc kubenswrapper[4725]: I0120 11:13:56.728891 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:14:26 crc kubenswrapper[4725]: I0120 11:14:26.727950 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:14:26 crc kubenswrapper[4725]: I0120 11:14:26.729115 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:14:56 crc kubenswrapper[4725]: I0120 11:14:56.727941 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:14:56 crc kubenswrapper[4725]: I0120 11:14:56.728623 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:14:56 crc kubenswrapper[4725]: I0120 11:14:56.728701 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:14:56 crc kubenswrapper[4725]: I0120 11:14:56.729725 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"76a81355d30bc66a5871f02822f0a9240f627473cc4697ea154fb735e225c69b"} pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 11:14:56 crc kubenswrapper[4725]: I0120 11:14:56.729906 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" containerID="cri-o://76a81355d30bc66a5871f02822f0a9240f627473cc4697ea154fb735e225c69b" gracePeriod=600 Jan 20 11:14:56 crc kubenswrapper[4725]: I0120 11:14:56.996343 4725 generic.go:334] "Generic (PLEG): container finished" podID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerID="76a81355d30bc66a5871f02822f0a9240f627473cc4697ea154fb735e225c69b" exitCode=0 Jan 20 11:14:56 crc kubenswrapper[4725]: I0120 11:14:56.996860 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerDied","Data":"76a81355d30bc66a5871f02822f0a9240f627473cc4697ea154fb735e225c69b"} Jan 20 11:14:56 crc kubenswrapper[4725]: I0120 11:14:56.996905 4725 scope.go:117] "RemoveContainer" containerID="c6e4775cbb437f357a123202b57e1186c3ce260099a44987956a10f515d8ed5f" Jan 20 11:14:58 crc kubenswrapper[4725]: I0120 11:14:58.008355 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"f824083bcf978a042383462398bd5ed39ef803d17307d0e1c02d7c37c541d2e2"} Jan 20 11:15:00 crc kubenswrapper[4725]: I0120 11:15:00.177276 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22"] Jan 20 11:15:00 crc kubenswrapper[4725]: 
I0120 11:15:00.178596 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" Jan 20 11:15:00 crc kubenswrapper[4725]: I0120 11:15:00.182022 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 11:15:00 crc kubenswrapper[4725]: I0120 11:15:00.191322 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 11:15:00 crc kubenswrapper[4725]: I0120 11:15:00.195737 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22"] Jan 20 11:15:00 crc kubenswrapper[4725]: I0120 11:15:00.397576 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-config-volume\") pod \"collect-profiles-29481795-mbt22\" (UID: \"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" Jan 20 11:15:00 crc kubenswrapper[4725]: I0120 11:15:00.397946 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5h9w\" (UniqueName: \"kubernetes.io/projected/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-kube-api-access-c5h9w\") pod \"collect-profiles-29481795-mbt22\" (UID: \"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" Jan 20 11:15:00 crc kubenswrapper[4725]: I0120 11:15:00.398001 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-secret-volume\") pod \"collect-profiles-29481795-mbt22\" (UID: 
\"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" Jan 20 11:15:00 crc kubenswrapper[4725]: I0120 11:15:00.499619 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-config-volume\") pod \"collect-profiles-29481795-mbt22\" (UID: \"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" Jan 20 11:15:00 crc kubenswrapper[4725]: I0120 11:15:00.499693 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5h9w\" (UniqueName: \"kubernetes.io/projected/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-kube-api-access-c5h9w\") pod \"collect-profiles-29481795-mbt22\" (UID: \"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" Jan 20 11:15:00 crc kubenswrapper[4725]: I0120 11:15:00.499747 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-secret-volume\") pod \"collect-profiles-29481795-mbt22\" (UID: \"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" Jan 20 11:15:00 crc kubenswrapper[4725]: I0120 11:15:00.501261 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-config-volume\") pod \"collect-profiles-29481795-mbt22\" (UID: \"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" Jan 20 11:15:00 crc kubenswrapper[4725]: I0120 11:15:00.506687 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-secret-volume\") pod \"collect-profiles-29481795-mbt22\" (UID: \"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" Jan 20 11:15:00 crc kubenswrapper[4725]: I0120 11:15:00.518101 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5h9w\" (UniqueName: \"kubernetes.io/projected/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-kube-api-access-c5h9w\") pod \"collect-profiles-29481795-mbt22\" (UID: \"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" Jan 20 11:15:00 crc kubenswrapper[4725]: I0120 11:15:00.598103 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" Jan 20 11:15:00 crc kubenswrapper[4725]: I0120 11:15:00.819635 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22"] Jan 20 11:15:00 crc kubenswrapper[4725]: W0120 11:15:00.829137 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a41df2e_87f8_4dc4_a80c_36bd1bac44aa.slice/crio-be88c9f544298f20c2fcb9c3a56f32347d37bc49982511f8c9805369d3941331 WatchSource:0}: Error finding container be88c9f544298f20c2fcb9c3a56f32347d37bc49982511f8c9805369d3941331: Status 404 returned error can't find the container with id be88c9f544298f20c2fcb9c3a56f32347d37bc49982511f8c9805369d3941331 Jan 20 11:15:01 crc kubenswrapper[4725]: I0120 11:15:01.027669 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" event={"ID":"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa","Type":"ContainerStarted","Data":"df134c08a91a6b779bd70a8a4d9a198b2216cb01743c5cae1cc33fd6809cfc61"} Jan 20 11:15:01 crc 
kubenswrapper[4725]: I0120 11:15:01.027727 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" event={"ID":"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa","Type":"ContainerStarted","Data":"be88c9f544298f20c2fcb9c3a56f32347d37bc49982511f8c9805369d3941331"} Jan 20 11:15:01 crc kubenswrapper[4725]: I0120 11:15:01.053841 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" podStartSLOduration=1.053816659 podStartE2EDuration="1.053816659s" podCreationTimestamp="2026-01-20 11:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:15:01.050201824 +0000 UTC m=+629.258523807" watchObservedRunningTime="2026-01-20 11:15:01.053816659 +0000 UTC m=+629.262138632" Jan 20 11:15:02 crc kubenswrapper[4725]: I0120 11:15:02.036291 4725 generic.go:334] "Generic (PLEG): container finished" podID="7a41df2e-87f8-4dc4-a80c-36bd1bac44aa" containerID="df134c08a91a6b779bd70a8a4d9a198b2216cb01743c5cae1cc33fd6809cfc61" exitCode=0 Jan 20 11:15:02 crc kubenswrapper[4725]: I0120 11:15:02.036357 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" event={"ID":"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa","Type":"ContainerDied","Data":"df134c08a91a6b779bd70a8a4d9a198b2216cb01743c5cae1cc33fd6809cfc61"} Jan 20 11:15:03 crc kubenswrapper[4725]: I0120 11:15:03.294709 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" Jan 20 11:15:03 crc kubenswrapper[4725]: I0120 11:15:03.445431 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-secret-volume\") pod \"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa\" (UID: \"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa\") " Jan 20 11:15:03 crc kubenswrapper[4725]: I0120 11:15:03.445634 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5h9w\" (UniqueName: \"kubernetes.io/projected/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-kube-api-access-c5h9w\") pod \"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa\" (UID: \"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa\") " Jan 20 11:15:03 crc kubenswrapper[4725]: I0120 11:15:03.445685 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-config-volume\") pod \"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa\" (UID: \"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa\") " Jan 20 11:15:03 crc kubenswrapper[4725]: I0120 11:15:03.447652 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-config-volume" (OuterVolumeSpecName: "config-volume") pod "7a41df2e-87f8-4dc4-a80c-36bd1bac44aa" (UID: "7a41df2e-87f8-4dc4-a80c-36bd1bac44aa"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:15:03 crc kubenswrapper[4725]: I0120 11:15:03.453704 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-kube-api-access-c5h9w" (OuterVolumeSpecName: "kube-api-access-c5h9w") pod "7a41df2e-87f8-4dc4-a80c-36bd1bac44aa" (UID: "7a41df2e-87f8-4dc4-a80c-36bd1bac44aa"). 
InnerVolumeSpecName "kube-api-access-c5h9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:15:03 crc kubenswrapper[4725]: I0120 11:15:03.456019 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7a41df2e-87f8-4dc4-a80c-36bd1bac44aa" (UID: "7a41df2e-87f8-4dc4-a80c-36bd1bac44aa"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:15:03 crc kubenswrapper[4725]: I0120 11:15:03.547972 4725 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 11:15:03 crc kubenswrapper[4725]: I0120 11:15:03.548034 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5h9w\" (UniqueName: \"kubernetes.io/projected/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-kube-api-access-c5h9w\") on node \"crc\" DevicePath \"\"" Jan 20 11:15:03 crc kubenswrapper[4725]: I0120 11:15:03.548045 4725 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 11:15:04 crc kubenswrapper[4725]: I0120 11:15:04.055997 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" event={"ID":"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa","Type":"ContainerDied","Data":"be88c9f544298f20c2fcb9c3a56f32347d37bc49982511f8c9805369d3941331"} Jan 20 11:15:04 crc kubenswrapper[4725]: I0120 11:15:04.056105 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22"
Jan 20 11:15:04 crc kubenswrapper[4725]: I0120 11:15:04.056072 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be88c9f544298f20c2fcb9c3a56f32347d37bc49982511f8c9805369d3941331"
Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.638450 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nz9p5"]
Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.639562 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovn-controller" containerID="cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348" gracePeriod=30
Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.639659 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="nbdb" containerID="cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7" gracePeriod=30
Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.639712 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="kube-rbac-proxy-node" containerID="cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385" gracePeriod=30
Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.639752 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovn-acl-logging" containerID="cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5" gracePeriod=30
Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.639769 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604" gracePeriod=30
Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.640053 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="northd" containerID="cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0" gracePeriod=30
Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.640173 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="sbdb" containerID="cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f" gracePeriod=30
Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.681654 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovnkube-controller" containerID="cri-o://3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2" gracePeriod=30
Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.833993 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vchwb_627f7c97-4173-413f-a90e-e2c5e058c53b/kube-multus/2.log"
Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.835016 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vchwb_627f7c97-4173-413f-a90e-e2c5e058c53b/kube-multus/1.log"
Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.835075 4725 generic.go:334] "Generic (PLEG): container finished" podID="627f7c97-4173-413f-a90e-e2c5e058c53b" containerID="02f0aeb59f3b42d33162846abb8c9018dd20f1ace3b32284fe79541bb421e0f5" exitCode=2
Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.835222 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vchwb" event={"ID":"627f7c97-4173-413f-a90e-e2c5e058c53b","Type":"ContainerDied","Data":"02f0aeb59f3b42d33162846abb8c9018dd20f1ace3b32284fe79541bb421e0f5"}
Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.835275 4725 scope.go:117] "RemoveContainer" containerID="31582cdcadbdaf1ab01a8b97fcd1abb4352c22086d2562704ab13dd4f470cea6"
Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.835847 4725 scope.go:117] "RemoveContainer" containerID="02f0aeb59f3b42d33162846abb8c9018dd20f1ace3b32284fe79541bb421e0f5"
Jan 20 11:16:40 crc kubenswrapper[4725]: E0120 11:16:40.836223 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-vchwb_openshift-multus(627f7c97-4173-413f-a90e-e2c5e058c53b)\"" pod="openshift-multus/multus-vchwb" podUID="627f7c97-4173-413f-a90e-e2c5e058c53b"
Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.841736 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovnkube-controller/3.log"
Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.844817 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovn-acl-logging/0.log"
Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.845409 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovn-controller/0.log"
Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.846001 4725 generic.go:334] "Generic (PLEG): container finished" podID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerID="3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2" exitCode=0
Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.846034 4725 generic.go:334] "Generic (PLEG): container finished" podID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerID="4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604" exitCode=0
Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.846045 4725 generic.go:334] "Generic (PLEG): container finished" podID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerID="9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385" exitCode=0
Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.846053 4725 generic.go:334] "Generic (PLEG): container finished" podID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerID="eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5" exitCode=143
Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.846061 4725 generic.go:334] "Generic (PLEG): container finished" podID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerID="f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348" exitCode=143
Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.846100 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerDied","Data":"3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2"}
Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.846132 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerDied","Data":"4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604"}
Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.846143 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerDied","Data":"9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385"}
Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.846155 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerDied","Data":"eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5"}
Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.846165 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerDied","Data":"f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348"}
Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.971703 4725 scope.go:117] "RemoveContainer" containerID="62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.444240 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovn-acl-logging/0.log"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.445445 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovn-controller/0.log"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.446321 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.517593 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qbj7d"]
Jan 20 11:16:41 crc kubenswrapper[4725]: E0120 11:16:41.517860 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="kube-rbac-proxy-node"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.517880 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="kube-rbac-proxy-node"
Jan 20 11:16:41 crc kubenswrapper[4725]: E0120 11:16:41.517894 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a41df2e-87f8-4dc4-a80c-36bd1bac44aa" containerName="collect-profiles"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.517901 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a41df2e-87f8-4dc4-a80c-36bd1bac44aa" containerName="collect-profiles"
Jan 20 11:16:41 crc kubenswrapper[4725]: E0120 11:16:41.517907 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="kubecfg-setup"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.517914 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="kubecfg-setup"
Jan 20 11:16:41 crc kubenswrapper[4725]: E0120 11:16:41.517924 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovn-controller"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.517931 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovn-controller"
Jan 20 11:16:41 crc kubenswrapper[4725]: E0120 11:16:41.517940 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="northd"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.517946 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="northd"
Jan 20 11:16:41 crc kubenswrapper[4725]: E0120 11:16:41.517952 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="nbdb"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.517957 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="nbdb"
Jan 20 11:16:41 crc kubenswrapper[4725]: E0120 11:16:41.517966 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="sbdb"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.517971 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="sbdb"
Jan 20 11:16:41 crc kubenswrapper[4725]: E0120 11:16:41.517980 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovnkube-controller"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.517986 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovnkube-controller"
Jan 20 11:16:41 crc kubenswrapper[4725]: E0120 11:16:41.517995 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovnkube-controller"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518001 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovnkube-controller"
Jan 20 11:16:41 crc kubenswrapper[4725]: E0120 11:16:41.518009 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovn-acl-logging"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518015 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovn-acl-logging"
Jan 20 11:16:41 crc kubenswrapper[4725]: E0120 11:16:41.518023 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="kube-rbac-proxy-ovn-metrics"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518029 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="kube-rbac-proxy-ovn-metrics"
Jan 20 11:16:41 crc kubenswrapper[4725]: E0120 11:16:41.518039 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovnkube-controller"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518045 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovnkube-controller"
Jan 20 11:16:41 crc kubenswrapper[4725]: E0120 11:16:41.518053 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovnkube-controller"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518058 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovnkube-controller"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518231 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovnkube-controller"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518269 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovnkube-controller"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518276 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovn-acl-logging"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518285 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="northd"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518293 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="kube-rbac-proxy-node"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518304 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovn-controller"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518313 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovnkube-controller"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518320 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a41df2e-87f8-4dc4-a80c-36bd1bac44aa" containerName="collect-profiles"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518327 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="sbdb"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518333 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="nbdb"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518341 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="kube-rbac-proxy-ovn-metrics"
Jan 20 11:16:41 crc kubenswrapper[4725]: E0120 11:16:41.518451 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovnkube-controller"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518458 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovnkube-controller"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518580 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovnkube-controller"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518773 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovnkube-controller"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.520374 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.592683 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-run-netns\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") "
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.592767 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-etc-openvswitch\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") "
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.592802 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-systemd-units\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") "
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.592818 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-slash\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") "
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.592850 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.592893 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.592912 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-ovnkube-script-lib\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") "
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593003 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593046 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-env-overrides\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") "
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593098 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-node-log\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") "
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593141 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-node-log" (OuterVolumeSpecName: "node-log") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593137 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-slash" (OuterVolumeSpecName: "host-slash") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593117 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-log-socket\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") "
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593220 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-log-socket" (OuterVolumeSpecName: "log-socket") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593261 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-cni-netd\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") "
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593281 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-cni-bin\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") "
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593311 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-ovnkube-config\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") "
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593326 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-ovn\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") "
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593387 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593395 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593447 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-kubelet\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") "
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593516 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593587 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-var-lib-openvswitch\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") "
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593638 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593689 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593625 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593734 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593836 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593875 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9143f3c2-a068-494d-b7e1-4200c04394a3-ovn-node-metrics-cert\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") "
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.594211 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsm7k\" (UniqueName: \"kubernetes.io/projected/9143f3c2-a068-494d-b7e1-4200c04394a3-kube-api-access-fsm7k\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") "
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.594253 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-systemd\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") "
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.594277 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-openvswitch\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") "
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.594363 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-run-ovn-kubernetes\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") "
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.594389 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") "
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.594425 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.594454 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.594554 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.594747 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-etc-openvswitch\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.594808 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/66c21855-eb77-483d-8eeb-4e8803477516-ovn-node-metrics-cert\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.594911 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/66c21855-eb77-483d-8eeb-4e8803477516-ovnkube-config\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.594963 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-run-openvswitch\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.594985 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-log-socket\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595007 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595054 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r8dk\" (UniqueName: \"kubernetes.io/projected/66c21855-eb77-483d-8eeb-4e8803477516-kube-api-access-4r8dk\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595122 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/66c21855-eb77-483d-8eeb-4e8803477516-ovnkube-script-lib\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595145 4725
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-run-systemd\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595173 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-run-ovn\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595194 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-node-log\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595273 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-systemd-units\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595299 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-slash\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595328 4725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-run-ovn-kubernetes\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595350 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-cni-bin\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595377 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-kubelet\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595403 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-var-lib-openvswitch\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595418 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-cni-netd\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 
11:16:41.595434 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-run-netns\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595452 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/66c21855-eb77-483d-8eeb-4e8803477516-env-overrides\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595503 4725 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595514 4725 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595525 4725 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595537 4725 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595548 4725 reconciler_common.go:293] 
"Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595558 4725 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595568 4725 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595584 4725 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-slash\") on node \"crc\" DevicePath \"\"" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595595 4725 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595604 4725 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-node-log\") on node \"crc\" DevicePath \"\"" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595615 4725 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-log-socket\") on node \"crc\" DevicePath \"\"" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595645 4725 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-cni-netd\") on 
node \"crc\" DevicePath \"\"" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595654 4725 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595663 4725 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595671 4725 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595681 4725 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595690 4725 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.603699 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9143f3c2-a068-494d-b7e1-4200c04394a3-kube-api-access-fsm7k" (OuterVolumeSpecName: "kube-api-access-fsm7k") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "kube-api-access-fsm7k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.607245 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9143f3c2-a068-494d-b7e1-4200c04394a3-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.613097 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.696818 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-run-openvswitch\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.696874 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.696900 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-log-socket\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.696927 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r8dk\" (UniqueName: \"kubernetes.io/projected/66c21855-eb77-483d-8eeb-4e8803477516-kube-api-access-4r8dk\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.696952 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-run-systemd\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.696981 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/66c21855-eb77-483d-8eeb-4e8803477516-ovnkube-script-lib\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697001 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-run-ovn\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697020 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-node-log\") pod 
\"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697046 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-systemd-units\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697031 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-run-openvswitch\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697107 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-run-systemd\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697152 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-run-ovn\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697178 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-log-socket\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 
crc kubenswrapper[4725]: I0120 11:16:41.697069 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-slash\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697158 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-slash\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697209 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-node-log\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697042 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697242 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-systemd-units\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697255 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-run-ovn-kubernetes\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697280 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-cni-bin\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697312 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-kubelet\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697329 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-cni-netd\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697365 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-var-lib-openvswitch\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697371 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-kubelet\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697384 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-run-netns\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697346 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-cni-bin\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697390 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-run-ovn-kubernetes\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697409 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/66c21855-eb77-483d-8eeb-4e8803477516-env-overrides\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697418 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-cni-netd\") 
pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697446 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-var-lib-openvswitch\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697449 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-run-netns\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697549 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-etc-openvswitch\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697600 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/66c21855-eb77-483d-8eeb-4e8803477516-ovn-node-metrics-cert\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697624 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/66c21855-eb77-483d-8eeb-4e8803477516-ovnkube-config\") pod \"ovnkube-node-qbj7d\" (UID: 
\"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697712 4725 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697726 4725 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9143f3c2-a068-494d-b7e1-4200c04394a3-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697738 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsm7k\" (UniqueName: \"kubernetes.io/projected/9143f3c2-a068-494d-b7e1-4200c04394a3-kube-api-access-fsm7k\") on node \"crc\" DevicePath \"\"" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697972 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/66c21855-eb77-483d-8eeb-4e8803477516-ovnkube-script-lib\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697977 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-etc-openvswitch\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.698198 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/66c21855-eb77-483d-8eeb-4e8803477516-env-overrides\") pod \"ovnkube-node-qbj7d\" (UID: 
\"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.699293 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/66c21855-eb77-483d-8eeb-4e8803477516-ovnkube-config\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.702626 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/66c21855-eb77-483d-8eeb-4e8803477516-ovn-node-metrics-cert\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.716769 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r8dk\" (UniqueName: \"kubernetes.io/projected/66c21855-eb77-483d-8eeb-4e8803477516-kube-api-access-4r8dk\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.841293 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.876145 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vchwb_627f7c97-4173-413f-a90e-e2c5e058c53b/kube-multus/2.log" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.882262 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovn-acl-logging/0.log" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.882827 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovn-controller/0.log" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.883320 4725 generic.go:334] "Generic (PLEG): container finished" podID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerID="62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f" exitCode=0 Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.883379 4725 generic.go:334] "Generic (PLEG): container finished" podID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerID="647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7" exitCode=0 Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.883387 4725 generic.go:334] "Generic (PLEG): container finished" podID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerID="d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0" exitCode=0 Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.883424 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerDied","Data":"62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f"} Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.883498 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerDied","Data":"647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7"} Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.883512 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerDied","Data":"d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0"} Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.883522 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerDied","Data":"841bffee0b69f32791094c7c4308ffc3cc66e9ac1d4699a14fb043fa42825bd2"} Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.883553 4725 scope.go:117] "RemoveContainer" containerID="3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.883725 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.924817 4725 scope.go:117] "RemoveContainer" containerID="62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.952448 4725 scope.go:117] "RemoveContainer" containerID="647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.958977 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nz9p5"] Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.962317 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nz9p5"] Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.976395 4725 scope.go:117] "RemoveContainer" containerID="d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.004379 4725 scope.go:117] "RemoveContainer" containerID="4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.019110 4725 scope.go:117] "RemoveContainer" containerID="9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.033390 4725 scope.go:117] "RemoveContainer" containerID="eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.049428 4725 scope.go:117] "RemoveContainer" containerID="f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.064219 4725 scope.go:117] "RemoveContainer" containerID="4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.079425 4725 scope.go:117] "RemoveContainer" 
containerID="3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2" Jan 20 11:16:42 crc kubenswrapper[4725]: E0120 11:16:42.079950 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2\": container with ID starting with 3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2 not found: ID does not exist" containerID="3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.080026 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2"} err="failed to get container status \"3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2\": rpc error: code = NotFound desc = could not find container \"3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2\": container with ID starting with 3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2 not found: ID does not exist" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.080160 4725 scope.go:117] "RemoveContainer" containerID="62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f" Jan 20 11:16:42 crc kubenswrapper[4725]: E0120 11:16:42.080662 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\": container with ID starting with 62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f not found: ID does not exist" containerID="62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.080696 4725 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f"} err="failed to get container status \"62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\": rpc error: code = NotFound desc = could not find container \"62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\": container with ID starting with 62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f not found: ID does not exist" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.080712 4725 scope.go:117] "RemoveContainer" containerID="647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7" Jan 20 11:16:42 crc kubenswrapper[4725]: E0120 11:16:42.081096 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\": container with ID starting with 647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7 not found: ID does not exist" containerID="647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.081138 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7"} err="failed to get container status \"647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\": rpc error: code = NotFound desc = could not find container \"647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\": container with ID starting with 647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7 not found: ID does not exist" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.081164 4725 scope.go:117] "RemoveContainer" containerID="d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0" Jan 20 11:16:42 crc kubenswrapper[4725]: E0120 11:16:42.081471 4725 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\": container with ID starting with d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0 not found: ID does not exist" containerID="d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.081498 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0"} err="failed to get container status \"d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\": rpc error: code = NotFound desc = could not find container \"d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\": container with ID starting with d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0 not found: ID does not exist" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.081516 4725 scope.go:117] "RemoveContainer" containerID="4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604" Jan 20 11:16:42 crc kubenswrapper[4725]: E0120 11:16:42.081921 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\": container with ID starting with 4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604 not found: ID does not exist" containerID="4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.081945 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604"} err="failed to get container status \"4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\": rpc error: code = NotFound desc = could not find container 
\"4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\": container with ID starting with 4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604 not found: ID does not exist" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.081961 4725 scope.go:117] "RemoveContainer" containerID="9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385" Jan 20 11:16:42 crc kubenswrapper[4725]: E0120 11:16:42.082417 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\": container with ID starting with 9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385 not found: ID does not exist" containerID="9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.082445 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385"} err="failed to get container status \"9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\": rpc error: code = NotFound desc = could not find container \"9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\": container with ID starting with 9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385 not found: ID does not exist" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.082457 4725 scope.go:117] "RemoveContainer" containerID="eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5" Jan 20 11:16:42 crc kubenswrapper[4725]: E0120 11:16:42.082685 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\": container with ID starting with eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5 not found: ID does not exist" 
containerID="eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.082703 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5"} err="failed to get container status \"eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\": rpc error: code = NotFound desc = could not find container \"eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\": container with ID starting with eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5 not found: ID does not exist" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.082714 4725 scope.go:117] "RemoveContainer" containerID="f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348" Jan 20 11:16:42 crc kubenswrapper[4725]: E0120 11:16:42.083067 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\": container with ID starting with f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348 not found: ID does not exist" containerID="f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.083128 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348"} err="failed to get container status \"f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\": rpc error: code = NotFound desc = could not find container \"f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\": container with ID starting with f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348 not found: ID does not exist" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.083153 4725 scope.go:117] 
"RemoveContainer" containerID="4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa" Jan 20 11:16:42 crc kubenswrapper[4725]: E0120 11:16:42.083601 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\": container with ID starting with 4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa not found: ID does not exist" containerID="4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.083631 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa"} err="failed to get container status \"4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\": rpc error: code = NotFound desc = could not find container \"4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\": container with ID starting with 4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa not found: ID does not exist" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.083649 4725 scope.go:117] "RemoveContainer" containerID="3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.083933 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2"} err="failed to get container status \"3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2\": rpc error: code = NotFound desc = could not find container \"3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2\": container with ID starting with 3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2 not found: ID does not exist" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.083979 4725 
scope.go:117] "RemoveContainer" containerID="62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.084453 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f"} err="failed to get container status \"62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\": rpc error: code = NotFound desc = could not find container \"62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\": container with ID starting with 62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f not found: ID does not exist" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.084488 4725 scope.go:117] "RemoveContainer" containerID="647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.084738 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7"} err="failed to get container status \"647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\": rpc error: code = NotFound desc = could not find container \"647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\": container with ID starting with 647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7 not found: ID does not exist" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.084766 4725 scope.go:117] "RemoveContainer" containerID="d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.084990 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0"} err="failed to get container status \"d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\": rpc 
error: code = NotFound desc = could not find container \"d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\": container with ID starting with d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0 not found: ID does not exist" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.085014 4725 scope.go:117] "RemoveContainer" containerID="4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.085368 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604"} err="failed to get container status \"4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\": rpc error: code = NotFound desc = could not find container \"4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\": container with ID starting with 4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604 not found: ID does not exist" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.085393 4725 scope.go:117] "RemoveContainer" containerID="9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.085787 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385"} err="failed to get container status \"9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\": rpc error: code = NotFound desc = could not find container \"9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\": container with ID starting with 9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385 not found: ID does not exist" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.085807 4725 scope.go:117] "RemoveContainer" containerID="eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5" Jan 20 11:16:42 crc 
kubenswrapper[4725]: I0120 11:16:42.086040 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5"} err="failed to get container status \"eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\": rpc error: code = NotFound desc = could not find container \"eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\": container with ID starting with eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5 not found: ID does not exist" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.086057 4725 scope.go:117] "RemoveContainer" containerID="f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.086287 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348"} err="failed to get container status \"f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\": rpc error: code = NotFound desc = could not find container \"f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\": container with ID starting with f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348 not found: ID does not exist" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.086322 4725 scope.go:117] "RemoveContainer" containerID="4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.086635 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa"} err="failed to get container status \"4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\": rpc error: code = NotFound desc = could not find container \"4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\": container 
with ID starting with 4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa not found: ID does not exist" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.086653 4725 scope.go:117] "RemoveContainer" containerID="3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.086864 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2"} err="failed to get container status \"3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2\": rpc error: code = NotFound desc = could not find container \"3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2\": container with ID starting with 3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2 not found: ID does not exist" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.086881 4725 scope.go:117] "RemoveContainer" containerID="62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.087118 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f"} err="failed to get container status \"62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\": rpc error: code = NotFound desc = could not find container \"62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\": container with ID starting with 62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f not found: ID does not exist" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.087144 4725 scope.go:117] "RemoveContainer" containerID="647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.087393 4725 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7"} err="failed to get container status \"647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\": rpc error: code = NotFound desc = could not find container \"647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\": container with ID starting with 647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7 not found: ID does not exist" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.087411 4725 scope.go:117] "RemoveContainer" containerID="d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.087780 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0"} err="failed to get container status \"d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\": rpc error: code = NotFound desc = could not find container \"d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\": container with ID starting with d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0 not found: ID does not exist" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.087796 4725 scope.go:117] "RemoveContainer" containerID="4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.088006 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604"} err="failed to get container status \"4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\": rpc error: code = NotFound desc = could not find container \"4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\": container with ID starting with 4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604 not found: ID does not 
exist" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.088024 4725 scope.go:117] "RemoveContainer" containerID="9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.088254 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385"} err="failed to get container status \"9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\": rpc error: code = NotFound desc = could not find container \"9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\": container with ID starting with 9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385 not found: ID does not exist" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.088275 4725 scope.go:117] "RemoveContainer" containerID="eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.088555 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5"} err="failed to get container status \"eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\": rpc error: code = NotFound desc = could not find container \"eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\": container with ID starting with eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5 not found: ID does not exist" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.088573 4725 scope.go:117] "RemoveContainer" containerID="f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.088852 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348"} err="failed to get container status 
\"f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\": rpc error: code = NotFound desc = could not find container \"f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\": container with ID starting with f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348 not found: ID does not exist" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.088885 4725 scope.go:117] "RemoveContainer" containerID="4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.089272 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa"} err="failed to get container status \"4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\": rpc error: code = NotFound desc = could not find container \"4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\": container with ID starting with 4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa not found: ID does not exist" Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.891709 4725 generic.go:334] "Generic (PLEG): container finished" podID="66c21855-eb77-483d-8eeb-4e8803477516" containerID="267d0a5dc1f83bfd374c9db2dd9c3173b4e4d0c8fc7cfbe5669976f31cbdf605" exitCode=0 Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.891821 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" event={"ID":"66c21855-eb77-483d-8eeb-4e8803477516","Type":"ContainerDied","Data":"267d0a5dc1f83bfd374c9db2dd9c3173b4e4d0c8fc7cfbe5669976f31cbdf605"} Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.891901 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" event={"ID":"66c21855-eb77-483d-8eeb-4e8803477516","Type":"ContainerStarted","Data":"f0f6941bebb498b63e09e40bc3e98f840083e4422b8a1ad7f981d685553f9263"} Jan 20 
11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.945148 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" path="/var/lib/kubelet/pods/9143f3c2-a068-494d-b7e1-4200c04394a3/volumes" Jan 20 11:16:43 crc kubenswrapper[4725]: I0120 11:16:43.906380 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" event={"ID":"66c21855-eb77-483d-8eeb-4e8803477516","Type":"ContainerStarted","Data":"7900a170afa7c3b38123c28e7d1d7311049655b23f17fc5059d1f3650d6f6121"} Jan 20 11:16:43 crc kubenswrapper[4725]: I0120 11:16:43.907138 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" event={"ID":"66c21855-eb77-483d-8eeb-4e8803477516","Type":"ContainerStarted","Data":"b4edda44f128780cad5ad58c8de0ddf729304cb662ce97362399ef2f8363b776"} Jan 20 11:16:43 crc kubenswrapper[4725]: I0120 11:16:43.907218 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" event={"ID":"66c21855-eb77-483d-8eeb-4e8803477516","Type":"ContainerStarted","Data":"e7b2351d5b7b60b5743dd60b52c3542c03af392dee279c87243125eac7aa0e1c"} Jan 20 11:16:43 crc kubenswrapper[4725]: I0120 11:16:43.907275 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" event={"ID":"66c21855-eb77-483d-8eeb-4e8803477516","Type":"ContainerStarted","Data":"d93ae5e3f743fd3c342fcdefae1e722ceffab74156067d38ca526eef5ef8e84a"} Jan 20 11:16:43 crc kubenswrapper[4725]: I0120 11:16:43.907290 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" event={"ID":"66c21855-eb77-483d-8eeb-4e8803477516","Type":"ContainerStarted","Data":"715944d8f1c3d0efdba2c59573b071dbeec420d24e3abe8d654adbe2a3a7326a"} Jan 20 11:16:43 crc kubenswrapper[4725]: I0120 11:16:43.907306 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" event={"ID":"66c21855-eb77-483d-8eeb-4e8803477516","Type":"ContainerStarted","Data":"a0672bc8ee8d903e456de464247c06850d230ad18a283107e77289753c516165"} Jan 20 11:16:46 crc kubenswrapper[4725]: I0120 11:16:46.940143 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" event={"ID":"66c21855-eb77-483d-8eeb-4e8803477516","Type":"ContainerStarted","Data":"d83cc0768146a80e2dd6582b826ad127a9e1b78f55ac1af98690ef546b30c842"} Jan 20 11:16:48 crc kubenswrapper[4725]: I0120 11:16:48.962737 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" event={"ID":"66c21855-eb77-483d-8eeb-4e8803477516","Type":"ContainerStarted","Data":"36a1f11c1523d46876acecd0379c70cc20b5e839c5611ddb31ff2304f6bc096a"} Jan 20 11:16:48 crc kubenswrapper[4725]: I0120 11:16:48.964705 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:48 crc kubenswrapper[4725]: I0120 11:16:48.964784 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:48 crc kubenswrapper[4725]: I0120 11:16:48.964842 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:48 crc kubenswrapper[4725]: I0120 11:16:48.993506 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:49 crc kubenswrapper[4725]: I0120 11:16:49.009617 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:49 crc kubenswrapper[4725]: I0120 11:16:49.009673 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" podStartSLOduration=8.009645216 
podStartE2EDuration="8.009645216s" podCreationTimestamp="2026-01-20 11:16:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:16:49.002618565 +0000 UTC m=+737.210940538" watchObservedRunningTime="2026-01-20 11:16:49.009645216 +0000 UTC m=+737.217967189" Jan 20 11:16:53 crc kubenswrapper[4725]: I0120 11:16:53.932875 4725 scope.go:117] "RemoveContainer" containerID="02f0aeb59f3b42d33162846abb8c9018dd20f1ace3b32284fe79541bb421e0f5" Jan 20 11:16:55 crc kubenswrapper[4725]: I0120 11:16:55.049870 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vchwb_627f7c97-4173-413f-a90e-e2c5e058c53b/kube-multus/2.log" Jan 20 11:16:55 crc kubenswrapper[4725]: I0120 11:16:55.050369 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vchwb" event={"ID":"627f7c97-4173-413f-a90e-e2c5e058c53b","Type":"ContainerStarted","Data":"0573fb223e7e2b51cbcc09d07e819561bb8d437ed9d4c425afb03dd444701a6b"} Jan 20 11:17:11 crc kubenswrapper[4725]: I0120 11:17:11.866210 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:17:18 crc kubenswrapper[4725]: I0120 11:17:18.250129 4725 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 20 11:17:26 crc kubenswrapper[4725]: I0120 11:17:26.728037 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:17:26 crc kubenswrapper[4725]: I0120 11:17:26.728729 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" 
podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.116174 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hz6gm"] Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.120963 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hz6gm" podUID="5f5afef1-c036-41b7-a884-72ee03a01ea9" containerName="registry-server" containerID="cri-o://77a87817f3fe69dd7feb600c256c2aa093a8c92e4e04681a765d0272f487d0d9" gracePeriod=30 Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.570860 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.634818 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f5afef1-c036-41b7-a884-72ee03a01ea9-utilities\") pod \"5f5afef1-c036-41b7-a884-72ee03a01ea9\" (UID: \"5f5afef1-c036-41b7-a884-72ee03a01ea9\") " Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.636103 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5afef1-c036-41b7-a884-72ee03a01ea9-catalog-content\") pod \"5f5afef1-c036-41b7-a884-72ee03a01ea9\" (UID: \"5f5afef1-c036-41b7-a884-72ee03a01ea9\") " Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.636157 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wls2\" (UniqueName: \"kubernetes.io/projected/5f5afef1-c036-41b7-a884-72ee03a01ea9-kube-api-access-7wls2\") pod \"5f5afef1-c036-41b7-a884-72ee03a01ea9\" (UID: 
\"5f5afef1-c036-41b7-a884-72ee03a01ea9\") " Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.637666 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f5afef1-c036-41b7-a884-72ee03a01ea9-utilities" (OuterVolumeSpecName: "utilities") pod "5f5afef1-c036-41b7-a884-72ee03a01ea9" (UID: "5f5afef1-c036-41b7-a884-72ee03a01ea9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.645558 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f5afef1-c036-41b7-a884-72ee03a01ea9-kube-api-access-7wls2" (OuterVolumeSpecName: "kube-api-access-7wls2") pod "5f5afef1-c036-41b7-a884-72ee03a01ea9" (UID: "5f5afef1-c036-41b7-a884-72ee03a01ea9"). InnerVolumeSpecName "kube-api-access-7wls2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.649989 4725 generic.go:334] "Generic (PLEG): container finished" podID="5f5afef1-c036-41b7-a884-72ee03a01ea9" containerID="77a87817f3fe69dd7feb600c256c2aa093a8c92e4e04681a765d0272f487d0d9" exitCode=0 Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.650070 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hz6gm" event={"ID":"5f5afef1-c036-41b7-a884-72ee03a01ea9","Type":"ContainerDied","Data":"77a87817f3fe69dd7feb600c256c2aa093a8c92e4e04681a765d0272f487d0d9"} Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.650140 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hz6gm" event={"ID":"5f5afef1-c036-41b7-a884-72ee03a01ea9","Type":"ContainerDied","Data":"620e1c951a5a2604e4ce57c3358b1935e7a5f6d46eec1265f136ddf73f1fb079"} Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.650163 4725 scope.go:117] "RemoveContainer" 
containerID="77a87817f3fe69dd7feb600c256c2aa093a8c92e4e04681a765d0272f487d0d9" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.650354 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.666742 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f5afef1-c036-41b7-a884-72ee03a01ea9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f5afef1-c036-41b7-a884-72ee03a01ea9" (UID: "5f5afef1-c036-41b7-a884-72ee03a01ea9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.678115 4725 scope.go:117] "RemoveContainer" containerID="fc5e5d5b4ed3a4dc0238a259d481b0857d19f493ef0334cdfe81629cccac0d98" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.695858 4725 scope.go:117] "RemoveContainer" containerID="510ca6f6e8912929553b8efdb405702730623d4998b299dd17775d18e33cd0c0" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.715391 4725 scope.go:117] "RemoveContainer" containerID="77a87817f3fe69dd7feb600c256c2aa093a8c92e4e04681a765d0272f487d0d9" Jan 20 11:17:52 crc kubenswrapper[4725]: E0120 11:17:52.716232 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77a87817f3fe69dd7feb600c256c2aa093a8c92e4e04681a765d0272f487d0d9\": container with ID starting with 77a87817f3fe69dd7feb600c256c2aa093a8c92e4e04681a765d0272f487d0d9 not found: ID does not exist" containerID="77a87817f3fe69dd7feb600c256c2aa093a8c92e4e04681a765d0272f487d0d9" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.716276 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77a87817f3fe69dd7feb600c256c2aa093a8c92e4e04681a765d0272f487d0d9"} err="failed to get container status 
\"77a87817f3fe69dd7feb600c256c2aa093a8c92e4e04681a765d0272f487d0d9\": rpc error: code = NotFound desc = could not find container \"77a87817f3fe69dd7feb600c256c2aa093a8c92e4e04681a765d0272f487d0d9\": container with ID starting with 77a87817f3fe69dd7feb600c256c2aa093a8c92e4e04681a765d0272f487d0d9 not found: ID does not exist" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.716365 4725 scope.go:117] "RemoveContainer" containerID="fc5e5d5b4ed3a4dc0238a259d481b0857d19f493ef0334cdfe81629cccac0d98" Jan 20 11:17:52 crc kubenswrapper[4725]: E0120 11:17:52.716765 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc5e5d5b4ed3a4dc0238a259d481b0857d19f493ef0334cdfe81629cccac0d98\": container with ID starting with fc5e5d5b4ed3a4dc0238a259d481b0857d19f493ef0334cdfe81629cccac0d98 not found: ID does not exist" containerID="fc5e5d5b4ed3a4dc0238a259d481b0857d19f493ef0334cdfe81629cccac0d98" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.716794 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc5e5d5b4ed3a4dc0238a259d481b0857d19f493ef0334cdfe81629cccac0d98"} err="failed to get container status \"fc5e5d5b4ed3a4dc0238a259d481b0857d19f493ef0334cdfe81629cccac0d98\": rpc error: code = NotFound desc = could not find container \"fc5e5d5b4ed3a4dc0238a259d481b0857d19f493ef0334cdfe81629cccac0d98\": container with ID starting with fc5e5d5b4ed3a4dc0238a259d481b0857d19f493ef0334cdfe81629cccac0d98 not found: ID does not exist" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.716808 4725 scope.go:117] "RemoveContainer" containerID="510ca6f6e8912929553b8efdb405702730623d4998b299dd17775d18e33cd0c0" Jan 20 11:17:52 crc kubenswrapper[4725]: E0120 11:17:52.717118 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"510ca6f6e8912929553b8efdb405702730623d4998b299dd17775d18e33cd0c0\": container with ID starting with 510ca6f6e8912929553b8efdb405702730623d4998b299dd17775d18e33cd0c0 not found: ID does not exist" containerID="510ca6f6e8912929553b8efdb405702730623d4998b299dd17775d18e33cd0c0" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.717137 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"510ca6f6e8912929553b8efdb405702730623d4998b299dd17775d18e33cd0c0"} err="failed to get container status \"510ca6f6e8912929553b8efdb405702730623d4998b299dd17775d18e33cd0c0\": rpc error: code = NotFound desc = could not find container \"510ca6f6e8912929553b8efdb405702730623d4998b299dd17775d18e33cd0c0\": container with ID starting with 510ca6f6e8912929553b8efdb405702730623d4998b299dd17775d18e33cd0c0 not found: ID does not exist" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.737938 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f5afef1-c036-41b7-a884-72ee03a01ea9-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.737996 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5afef1-c036-41b7-a884-72ee03a01ea9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.738012 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wls2\" (UniqueName: \"kubernetes.io/projected/5f5afef1-c036-41b7-a884-72ee03a01ea9-kube-api-access-7wls2\") on node \"crc\" DevicePath \"\"" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.980242 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hz6gm"] Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.984927 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-hz6gm"] Jan 20 11:17:54 crc kubenswrapper[4725]: I0120 11:17:54.942397 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f5afef1-c036-41b7-a884-72ee03a01ea9" path="/var/lib/kubelet/pods/5f5afef1-c036-41b7-a884-72ee03a01ea9/volumes" Jan 20 11:17:56 crc kubenswrapper[4725]: I0120 11:17:56.727791 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:17:56 crc kubenswrapper[4725]: I0120 11:17:56.728455 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.029827 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm"] Jan 20 11:17:57 crc kubenswrapper[4725]: E0120 11:17:57.030326 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f5afef1-c036-41b7-a884-72ee03a01ea9" containerName="extract-utilities" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.030353 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f5afef1-c036-41b7-a884-72ee03a01ea9" containerName="extract-utilities" Jan 20 11:17:57 crc kubenswrapper[4725]: E0120 11:17:57.030387 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f5afef1-c036-41b7-a884-72ee03a01ea9" containerName="extract-content" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.030395 4725 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5f5afef1-c036-41b7-a884-72ee03a01ea9" containerName="extract-content" Jan 20 11:17:57 crc kubenswrapper[4725]: E0120 11:17:57.030403 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f5afef1-c036-41b7-a884-72ee03a01ea9" containerName="registry-server" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.030411 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f5afef1-c036-41b7-a884-72ee03a01ea9" containerName="registry-server" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.030583 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f5afef1-c036-41b7-a884-72ee03a01ea9" containerName="registry-server" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.031754 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.035535 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.054806 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm"] Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.214866 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/418d6042-ac1e-433e-a820-04d774775787-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm\" (UID: \"418d6042-ac1e-433e-a820-04d774775787\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.214994 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tmw9\" (UniqueName: 
\"kubernetes.io/projected/418d6042-ac1e-433e-a820-04d774775787-kube-api-access-2tmw9\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm\" (UID: \"418d6042-ac1e-433e-a820-04d774775787\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.215070 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/418d6042-ac1e-433e-a820-04d774775787-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm\" (UID: \"418d6042-ac1e-433e-a820-04d774775787\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.316337 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/418d6042-ac1e-433e-a820-04d774775787-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm\" (UID: \"418d6042-ac1e-433e-a820-04d774775787\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.316463 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/418d6042-ac1e-433e-a820-04d774775787-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm\" (UID: \"418d6042-ac1e-433e-a820-04d774775787\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.316503 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tmw9\" (UniqueName: \"kubernetes.io/projected/418d6042-ac1e-433e-a820-04d774775787-kube-api-access-2tmw9\") pod 
\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm\" (UID: \"418d6042-ac1e-433e-a820-04d774775787\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.317238 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/418d6042-ac1e-433e-a820-04d774775787-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm\" (UID: \"418d6042-ac1e-433e-a820-04d774775787\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.317238 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/418d6042-ac1e-433e-a820-04d774775787-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm\" (UID: \"418d6042-ac1e-433e-a820-04d774775787\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.342893 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tmw9\" (UniqueName: \"kubernetes.io/projected/418d6042-ac1e-433e-a820-04d774775787-kube-api-access-2tmw9\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm\" (UID: \"418d6042-ac1e-433e-a820-04d774775787\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.351551 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.597607 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm"] Jan 20 11:17:57 crc kubenswrapper[4725]: W0120 11:17:57.616400 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod418d6042_ac1e_433e_a820_04d774775787.slice/crio-aa384c5a04834e66491369345a4edd73840cca8a01789608ebb703106975759d WatchSource:0}: Error finding container aa384c5a04834e66491369345a4edd73840cca8a01789608ebb703106975759d: Status 404 returned error can't find the container with id aa384c5a04834e66491369345a4edd73840cca8a01789608ebb703106975759d Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.686363 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" event={"ID":"418d6042-ac1e-433e-a820-04d774775787","Type":"ContainerStarted","Data":"aa384c5a04834e66491369345a4edd73840cca8a01789608ebb703106975759d"} Jan 20 11:17:58 crc kubenswrapper[4725]: I0120 11:17:58.694926 4725 generic.go:334] "Generic (PLEG): container finished" podID="418d6042-ac1e-433e-a820-04d774775787" containerID="1dc958a87de3cd6c497c14ff1be6f25007b46e4183a208911c83119472655356" exitCode=0 Jan 20 11:17:58 crc kubenswrapper[4725]: I0120 11:17:58.695351 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" event={"ID":"418d6042-ac1e-433e-a820-04d774775787","Type":"ContainerDied","Data":"1dc958a87de3cd6c497c14ff1be6f25007b46e4183a208911c83119472655356"} Jan 20 11:17:58 crc kubenswrapper[4725]: I0120 11:17:58.697703 4725 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Jan 20 11:18:00 crc kubenswrapper[4725]: I0120 11:18:00.166025 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6dxvd"] Jan 20 11:18:00 crc kubenswrapper[4725]: I0120 11:18:00.168489 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:00 crc kubenswrapper[4725]: I0120 11:18:00.193341 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6dxvd"] Jan 20 11:18:00 crc kubenswrapper[4725]: I0120 11:18:00.262228 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-catalog-content\") pod \"redhat-operators-6dxvd\" (UID: \"dfbfb8b9-615e-477a-9ab8-112b0c09aa12\") " pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:00 crc kubenswrapper[4725]: I0120 11:18:00.262722 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrm49\" (UniqueName: \"kubernetes.io/projected/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-kube-api-access-nrm49\") pod \"redhat-operators-6dxvd\" (UID: \"dfbfb8b9-615e-477a-9ab8-112b0c09aa12\") " pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:00 crc kubenswrapper[4725]: I0120 11:18:00.262780 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-utilities\") pod \"redhat-operators-6dxvd\" (UID: \"dfbfb8b9-615e-477a-9ab8-112b0c09aa12\") " pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:00 crc kubenswrapper[4725]: I0120 11:18:00.363757 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-utilities\") pod \"redhat-operators-6dxvd\" (UID: \"dfbfb8b9-615e-477a-9ab8-112b0c09aa12\") " pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:00 crc kubenswrapper[4725]: I0120 11:18:00.363969 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-catalog-content\") pod \"redhat-operators-6dxvd\" (UID: \"dfbfb8b9-615e-477a-9ab8-112b0c09aa12\") " pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:00 crc kubenswrapper[4725]: I0120 11:18:00.364017 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrm49\" (UniqueName: \"kubernetes.io/projected/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-kube-api-access-nrm49\") pod \"redhat-operators-6dxvd\" (UID: \"dfbfb8b9-615e-477a-9ab8-112b0c09aa12\") " pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:00 crc kubenswrapper[4725]: I0120 11:18:00.364703 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-utilities\") pod \"redhat-operators-6dxvd\" (UID: \"dfbfb8b9-615e-477a-9ab8-112b0c09aa12\") " pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:00 crc kubenswrapper[4725]: I0120 11:18:00.364938 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-catalog-content\") pod \"redhat-operators-6dxvd\" (UID: \"dfbfb8b9-615e-477a-9ab8-112b0c09aa12\") " pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:00 crc kubenswrapper[4725]: I0120 11:18:00.389212 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrm49\" (UniqueName: 
\"kubernetes.io/projected/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-kube-api-access-nrm49\") pod \"redhat-operators-6dxvd\" (UID: \"dfbfb8b9-615e-477a-9ab8-112b0c09aa12\") " pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:00 crc kubenswrapper[4725]: I0120 11:18:00.517200 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:00 crc kubenswrapper[4725]: I0120 11:18:00.737937 4725 generic.go:334] "Generic (PLEG): container finished" podID="418d6042-ac1e-433e-a820-04d774775787" containerID="7dda32b0ab9711e9a299668d44b89608ce2dd3ed01b455c87737a2b1a6e42351" exitCode=0 Jan 20 11:18:00 crc kubenswrapper[4725]: I0120 11:18:00.738366 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" event={"ID":"418d6042-ac1e-433e-a820-04d774775787","Type":"ContainerDied","Data":"7dda32b0ab9711e9a299668d44b89608ce2dd3ed01b455c87737a2b1a6e42351"} Jan 20 11:18:00 crc kubenswrapper[4725]: I0120 11:18:00.906853 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6dxvd"] Jan 20 11:18:00 crc kubenswrapper[4725]: W0120 11:18:00.916724 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfbfb8b9_615e_477a_9ab8_112b0c09aa12.slice/crio-9e71a7c6e17b663ce879958f77ba79bc2e033518300716782480d4a94f0a9c49 WatchSource:0}: Error finding container 9e71a7c6e17b663ce879958f77ba79bc2e033518300716782480d4a94f0a9c49: Status 404 returned error can't find the container with id 9e71a7c6e17b663ce879958f77ba79bc2e033518300716782480d4a94f0a9c49 Jan 20 11:18:01 crc kubenswrapper[4725]: I0120 11:18:01.747200 4725 generic.go:334] "Generic (PLEG): container finished" podID="dfbfb8b9-615e-477a-9ab8-112b0c09aa12" containerID="b28c935c40fb5964b74a8daaace2c11004f108bb7d072c1c2c0d741d5ef699dd" exitCode=0 Jan 20 
11:18:01 crc kubenswrapper[4725]: I0120 11:18:01.747290 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dxvd" event={"ID":"dfbfb8b9-615e-477a-9ab8-112b0c09aa12","Type":"ContainerDied","Data":"b28c935c40fb5964b74a8daaace2c11004f108bb7d072c1c2c0d741d5ef699dd"} Jan 20 11:18:01 crc kubenswrapper[4725]: I0120 11:18:01.747397 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dxvd" event={"ID":"dfbfb8b9-615e-477a-9ab8-112b0c09aa12","Type":"ContainerStarted","Data":"9e71a7c6e17b663ce879958f77ba79bc2e033518300716782480d4a94f0a9c49"} Jan 20 11:18:01 crc kubenswrapper[4725]: I0120 11:18:01.751620 4725 generic.go:334] "Generic (PLEG): container finished" podID="418d6042-ac1e-433e-a820-04d774775787" containerID="575ece06f97dc30c0ba79ac587e8a491b2103baf494e59a2bebe0cde72fa96c4" exitCode=0 Jan 20 11:18:01 crc kubenswrapper[4725]: I0120 11:18:01.751660 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" event={"ID":"418d6042-ac1e-433e-a820-04d774775787","Type":"ContainerDied","Data":"575ece06f97dc30c0ba79ac587e8a491b2103baf494e59a2bebe0cde72fa96c4"} Jan 20 11:18:02 crc kubenswrapper[4725]: I0120 11:18:02.761006 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dxvd" event={"ID":"dfbfb8b9-615e-477a-9ab8-112b0c09aa12","Type":"ContainerStarted","Data":"b832293d87b16279d6b773637db081d939113e1ece2da21f76dbd46e266d78ed"} Jan 20 11:18:03 crc kubenswrapper[4725]: I0120 11:18:03.093622 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" Jan 20 11:18:03 crc kubenswrapper[4725]: I0120 11:18:03.214357 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tmw9\" (UniqueName: \"kubernetes.io/projected/418d6042-ac1e-433e-a820-04d774775787-kube-api-access-2tmw9\") pod \"418d6042-ac1e-433e-a820-04d774775787\" (UID: \"418d6042-ac1e-433e-a820-04d774775787\") " Jan 20 11:18:03 crc kubenswrapper[4725]: I0120 11:18:03.214448 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/418d6042-ac1e-433e-a820-04d774775787-util\") pod \"418d6042-ac1e-433e-a820-04d774775787\" (UID: \"418d6042-ac1e-433e-a820-04d774775787\") " Jan 20 11:18:03 crc kubenswrapper[4725]: I0120 11:18:03.214484 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/418d6042-ac1e-433e-a820-04d774775787-bundle\") pod \"418d6042-ac1e-433e-a820-04d774775787\" (UID: \"418d6042-ac1e-433e-a820-04d774775787\") " Jan 20 11:18:03 crc kubenswrapper[4725]: I0120 11:18:03.217114 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/418d6042-ac1e-433e-a820-04d774775787-bundle" (OuterVolumeSpecName: "bundle") pod "418d6042-ac1e-433e-a820-04d774775787" (UID: "418d6042-ac1e-433e-a820-04d774775787"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:18:03 crc kubenswrapper[4725]: I0120 11:18:03.218840 4725 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/418d6042-ac1e-433e-a820-04d774775787-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:03 crc kubenswrapper[4725]: I0120 11:18:03.236124 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/418d6042-ac1e-433e-a820-04d774775787-kube-api-access-2tmw9" (OuterVolumeSpecName: "kube-api-access-2tmw9") pod "418d6042-ac1e-433e-a820-04d774775787" (UID: "418d6042-ac1e-433e-a820-04d774775787"). InnerVolumeSpecName "kube-api-access-2tmw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:18:03 crc kubenswrapper[4725]: I0120 11:18:03.320004 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tmw9\" (UniqueName: \"kubernetes.io/projected/418d6042-ac1e-433e-a820-04d774775787-kube-api-access-2tmw9\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:03 crc kubenswrapper[4725]: I0120 11:18:03.399371 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/418d6042-ac1e-433e-a820-04d774775787-util" (OuterVolumeSpecName: "util") pod "418d6042-ac1e-433e-a820-04d774775787" (UID: "418d6042-ac1e-433e-a820-04d774775787"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:18:03 crc kubenswrapper[4725]: I0120 11:18:03.421788 4725 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/418d6042-ac1e-433e-a820-04d774775787-util\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:03 crc kubenswrapper[4725]: I0120 11:18:03.773499 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" event={"ID":"418d6042-ac1e-433e-a820-04d774775787","Type":"ContainerDied","Data":"aa384c5a04834e66491369345a4edd73840cca8a01789608ebb703106975759d"} Jan 20 11:18:03 crc kubenswrapper[4725]: I0120 11:18:03.773570 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa384c5a04834e66491369345a4edd73840cca8a01789608ebb703106975759d" Jan 20 11:18:03 crc kubenswrapper[4725]: I0120 11:18:03.773634 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" Jan 20 11:18:04 crc kubenswrapper[4725]: I0120 11:18:04.783217 4725 generic.go:334] "Generic (PLEG): container finished" podID="dfbfb8b9-615e-477a-9ab8-112b0c09aa12" containerID="b832293d87b16279d6b773637db081d939113e1ece2da21f76dbd46e266d78ed" exitCode=0 Jan 20 11:18:04 crc kubenswrapper[4725]: I0120 11:18:04.783309 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dxvd" event={"ID":"dfbfb8b9-615e-477a-9ab8-112b0c09aa12","Type":"ContainerDied","Data":"b832293d87b16279d6b773637db081d939113e1ece2da21f76dbd46e266d78ed"} Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.193416 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk"] Jan 20 11:18:05 crc kubenswrapper[4725]: E0120 11:18:05.193657 4725 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="418d6042-ac1e-433e-a820-04d774775787" containerName="util" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.193671 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="418d6042-ac1e-433e-a820-04d774775787" containerName="util" Jan 20 11:18:05 crc kubenswrapper[4725]: E0120 11:18:05.193684 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="418d6042-ac1e-433e-a820-04d774775787" containerName="extract" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.193691 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="418d6042-ac1e-433e-a820-04d774775787" containerName="extract" Jan 20 11:18:05 crc kubenswrapper[4725]: E0120 11:18:05.193708 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="418d6042-ac1e-433e-a820-04d774775787" containerName="pull" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.193715 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="418d6042-ac1e-433e-a820-04d774775787" containerName="pull" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.193824 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="418d6042-ac1e-433e-a820-04d774775787" containerName="extract" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.195131 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.200577 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.210366 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk"] Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.395943 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52thq\" (UniqueName: \"kubernetes.io/projected/484dd827-7fd5-4cbc-878f-400b31b6179c-kube-api-access-52thq\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk\" (UID: \"484dd827-7fd5-4cbc-878f-400b31b6179c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.396123 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/484dd827-7fd5-4cbc-878f-400b31b6179c-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk\" (UID: \"484dd827-7fd5-4cbc-878f-400b31b6179c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.396183 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/484dd827-7fd5-4cbc-878f-400b31b6179c-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk\" (UID: \"484dd827-7fd5-4cbc-878f-400b31b6179c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" Jan 20 11:18:05 crc kubenswrapper[4725]: 
I0120 11:18:05.498240 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/484dd827-7fd5-4cbc-878f-400b31b6179c-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk\" (UID: \"484dd827-7fd5-4cbc-878f-400b31b6179c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.498327 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/484dd827-7fd5-4cbc-878f-400b31b6179c-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk\" (UID: \"484dd827-7fd5-4cbc-878f-400b31b6179c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.498387 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52thq\" (UniqueName: \"kubernetes.io/projected/484dd827-7fd5-4cbc-878f-400b31b6179c-kube-api-access-52thq\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk\" (UID: \"484dd827-7fd5-4cbc-878f-400b31b6179c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.500146 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/484dd827-7fd5-4cbc-878f-400b31b6179c-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk\" (UID: \"484dd827-7fd5-4cbc-878f-400b31b6179c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.500195 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/484dd827-7fd5-4cbc-878f-400b31b6179c-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk\" (UID: \"484dd827-7fd5-4cbc-878f-400b31b6179c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.522668 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52thq\" (UniqueName: \"kubernetes.io/projected/484dd827-7fd5-4cbc-878f-400b31b6179c-kube-api-access-52thq\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk\" (UID: \"484dd827-7fd5-4cbc-878f-400b31b6179c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.792267 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dxvd" event={"ID":"dfbfb8b9-615e-477a-9ab8-112b0c09aa12","Type":"ContainerStarted","Data":"b0f70b712c905a55b259faee2c4b5bab10f818119e51a23f7359145520a977d4"} Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.815227 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6dxvd" podStartSLOduration=2.255387679 podStartE2EDuration="5.815205489s" podCreationTimestamp="2026-01-20 11:18:00 +0000 UTC" firstStartedPulling="2026-01-20 11:18:01.750625081 +0000 UTC m=+809.958947054" lastFinishedPulling="2026-01-20 11:18:05.310442891 +0000 UTC m=+813.518764864" observedRunningTime="2026-01-20 11:18:05.813649319 +0000 UTC m=+814.021971292" watchObservedRunningTime="2026-01-20 11:18:05.815205489 +0000 UTC m=+814.023527462" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.816935 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.043231 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms"] Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.044395 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.066419 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms"] Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.145042 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk"] Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.210227 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ea19653a-0b47-400b-bcce-8034cb7f6d55-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms\" (UID: \"ea19653a-0b47-400b-bcce-8034cb7f6d55\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.210552 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea19653a-0b47-400b-bcce-8034cb7f6d55-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms\" (UID: \"ea19653a-0b47-400b-bcce-8034cb7f6d55\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.210639 4725 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zflz2\" (UniqueName: \"kubernetes.io/projected/ea19653a-0b47-400b-bcce-8034cb7f6d55-kube-api-access-zflz2\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms\" (UID: \"ea19653a-0b47-400b-bcce-8034cb7f6d55\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.311920 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ea19653a-0b47-400b-bcce-8034cb7f6d55-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms\" (UID: \"ea19653a-0b47-400b-bcce-8034cb7f6d55\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.312049 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea19653a-0b47-400b-bcce-8034cb7f6d55-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms\" (UID: \"ea19653a-0b47-400b-bcce-8034cb7f6d55\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.312455 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zflz2\" (UniqueName: \"kubernetes.io/projected/ea19653a-0b47-400b-bcce-8034cb7f6d55-kube-api-access-zflz2\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms\" (UID: \"ea19653a-0b47-400b-bcce-8034cb7f6d55\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.312739 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/ea19653a-0b47-400b-bcce-8034cb7f6d55-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms\" (UID: \"ea19653a-0b47-400b-bcce-8034cb7f6d55\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.312995 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea19653a-0b47-400b-bcce-8034cb7f6d55-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms\" (UID: \"ea19653a-0b47-400b-bcce-8034cb7f6d55\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.335709 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zflz2\" (UniqueName: \"kubernetes.io/projected/ea19653a-0b47-400b-bcce-8034cb7f6d55-kube-api-access-zflz2\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms\" (UID: \"ea19653a-0b47-400b-bcce-8034cb7f6d55\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.367508 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.716958 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms"] Jan 20 11:18:06 crc kubenswrapper[4725]: W0120 11:18:06.726828 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea19653a_0b47_400b_bcce_8034cb7f6d55.slice/crio-53e9a49e6f0bceb5eee877a245693c04ece0f113b83cdcb71638504428b9b03b WatchSource:0}: Error finding container 53e9a49e6f0bceb5eee877a245693c04ece0f113b83cdcb71638504428b9b03b: Status 404 returned error can't find the container with id 53e9a49e6f0bceb5eee877a245693c04ece0f113b83cdcb71638504428b9b03b Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.801212 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" event={"ID":"ea19653a-0b47-400b-bcce-8034cb7f6d55","Type":"ContainerStarted","Data":"53e9a49e6f0bceb5eee877a245693c04ece0f113b83cdcb71638504428b9b03b"} Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.802949 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" event={"ID":"484dd827-7fd5-4cbc-878f-400b31b6179c","Type":"ContainerStarted","Data":"e05bf73be14de89e1588664ae1d96a70523c14053222557ccd985b4afd63f9c2"} Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.802995 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" event={"ID":"484dd827-7fd5-4cbc-878f-400b31b6179c","Type":"ContainerStarted","Data":"f83197650a1b5fbe35a37eba7340df1e95f9e5e5cf1734b547d1388f2b52f207"} Jan 20 11:18:07 crc kubenswrapper[4725]: I0120 
11:18:07.822466 4725 generic.go:334] "Generic (PLEG): container finished" podID="484dd827-7fd5-4cbc-878f-400b31b6179c" containerID="e05bf73be14de89e1588664ae1d96a70523c14053222557ccd985b4afd63f9c2" exitCode=0 Jan 20 11:18:07 crc kubenswrapper[4725]: I0120 11:18:07.823308 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" event={"ID":"484dd827-7fd5-4cbc-878f-400b31b6179c","Type":"ContainerDied","Data":"e05bf73be14de89e1588664ae1d96a70523c14053222557ccd985b4afd63f9c2"} Jan 20 11:18:07 crc kubenswrapper[4725]: I0120 11:18:07.826807 4725 generic.go:334] "Generic (PLEG): container finished" podID="ea19653a-0b47-400b-bcce-8034cb7f6d55" containerID="afcd024f507cdc4dfa0390785e098262bc44374a6f830f19979d6218c6e45d66" exitCode=0 Jan 20 11:18:07 crc kubenswrapper[4725]: I0120 11:18:07.826875 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" event={"ID":"ea19653a-0b47-400b-bcce-8034cb7f6d55","Type":"ContainerDied","Data":"afcd024f507cdc4dfa0390785e098262bc44374a6f830f19979d6218c6e45d66"} Jan 20 11:18:10 crc kubenswrapper[4725]: I0120 11:18:10.809967 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:10 crc kubenswrapper[4725]: I0120 11:18:10.811438 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:10 crc kubenswrapper[4725]: I0120 11:18:10.850328 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vvv86"] Jan 20 11:18:10 crc kubenswrapper[4725]: I0120 11:18:10.852230 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:10 crc kubenswrapper[4725]: I0120 11:18:10.871686 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" event={"ID":"484dd827-7fd5-4cbc-878f-400b31b6179c","Type":"ContainerStarted","Data":"2b1f48a6ff2c2ab3ca2ed1d1ab3ea83cb646b049eebfcbda42a7c54067eb83dd"} Jan 20 11:18:10 crc kubenswrapper[4725]: I0120 11:18:10.873316 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vvv86"] Jan 20 11:18:10 crc kubenswrapper[4725]: I0120 11:18:10.912611 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4e296b6-b743-4253-8266-848212ba1001-utilities\") pod \"certified-operators-vvv86\" (UID: \"d4e296b6-b743-4253-8266-848212ba1001\") " pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:10 crc kubenswrapper[4725]: I0120 11:18:10.912727 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4e296b6-b743-4253-8266-848212ba1001-catalog-content\") pod \"certified-operators-vvv86\" (UID: \"d4e296b6-b743-4253-8266-848212ba1001\") " pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:10 crc kubenswrapper[4725]: I0120 11:18:10.912783 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnsmr\" (UniqueName: \"kubernetes.io/projected/d4e296b6-b743-4253-8266-848212ba1001-kube-api-access-dnsmr\") pod \"certified-operators-vvv86\" (UID: \"d4e296b6-b743-4253-8266-848212ba1001\") " pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:11 crc kubenswrapper[4725]: I0120 11:18:11.014399 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4e296b6-b743-4253-8266-848212ba1001-catalog-content\") pod \"certified-operators-vvv86\" (UID: \"d4e296b6-b743-4253-8266-848212ba1001\") " pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:11 crc kubenswrapper[4725]: I0120 11:18:11.014469 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnsmr\" (UniqueName: \"kubernetes.io/projected/d4e296b6-b743-4253-8266-848212ba1001-kube-api-access-dnsmr\") pod \"certified-operators-vvv86\" (UID: \"d4e296b6-b743-4253-8266-848212ba1001\") " pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:11 crc kubenswrapper[4725]: I0120 11:18:11.014531 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4e296b6-b743-4253-8266-848212ba1001-utilities\") pod \"certified-operators-vvv86\" (UID: \"d4e296b6-b743-4253-8266-848212ba1001\") " pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:11 crc kubenswrapper[4725]: I0120 11:18:11.016189 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4e296b6-b743-4253-8266-848212ba1001-catalog-content\") pod \"certified-operators-vvv86\" (UID: \"d4e296b6-b743-4253-8266-848212ba1001\") " pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:11 crc kubenswrapper[4725]: I0120 11:18:11.016942 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4e296b6-b743-4253-8266-848212ba1001-utilities\") pod \"certified-operators-vvv86\" (UID: \"d4e296b6-b743-4253-8266-848212ba1001\") " pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:11 crc kubenswrapper[4725]: I0120 11:18:11.118462 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnsmr\" (UniqueName: 
\"kubernetes.io/projected/d4e296b6-b743-4253-8266-848212ba1001-kube-api-access-dnsmr\") pod \"certified-operators-vvv86\" (UID: \"d4e296b6-b743-4253-8266-848212ba1001\") " pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:11 crc kubenswrapper[4725]: I0120 11:18:11.402141 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:11 crc kubenswrapper[4725]: I0120 11:18:11.978902 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6dxvd" podUID="dfbfb8b9-615e-477a-9ab8-112b0c09aa12" containerName="registry-server" probeResult="failure" output=< Jan 20 11:18:11 crc kubenswrapper[4725]: timeout: failed to connect service ":50051" within 1s Jan 20 11:18:11 crc kubenswrapper[4725]: > Jan 20 11:18:11 crc kubenswrapper[4725]: I0120 11:18:11.987439 4725 generic.go:334] "Generic (PLEG): container finished" podID="484dd827-7fd5-4cbc-878f-400b31b6179c" containerID="2b1f48a6ff2c2ab3ca2ed1d1ab3ea83cb646b049eebfcbda42a7c54067eb83dd" exitCode=0 Jan 20 11:18:11 crc kubenswrapper[4725]: I0120 11:18:11.987518 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" event={"ID":"484dd827-7fd5-4cbc-878f-400b31b6179c","Type":"ContainerDied","Data":"2b1f48a6ff2c2ab3ca2ed1d1ab3ea83cb646b049eebfcbda42a7c54067eb83dd"} Jan 20 11:18:12 crc kubenswrapper[4725]: I0120 11:18:12.101994 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd"] Jan 20 11:18:12 crc kubenswrapper[4725]: I0120 11:18:12.104915 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" Jan 20 11:18:12 crc kubenswrapper[4725]: I0120 11:18:12.131559 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd"] Jan 20 11:18:12 crc kubenswrapper[4725]: I0120 11:18:12.355608 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr6hr\" (UniqueName: \"kubernetes.io/projected/10d53364-23ca-4726-bed9-460fb6763fa1-kube-api-access-tr6hr\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd\" (UID: \"10d53364-23ca-4726-bed9-460fb6763fa1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" Jan 20 11:18:12 crc kubenswrapper[4725]: I0120 11:18:12.355701 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10d53364-23ca-4726-bed9-460fb6763fa1-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd\" (UID: \"10d53364-23ca-4726-bed9-460fb6763fa1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" Jan 20 11:18:12 crc kubenswrapper[4725]: I0120 11:18:12.355783 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10d53364-23ca-4726-bed9-460fb6763fa1-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd\" (UID: \"10d53364-23ca-4726-bed9-460fb6763fa1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" Jan 20 11:18:12 crc kubenswrapper[4725]: I0120 11:18:12.436037 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vvv86"] Jan 20 11:18:12 crc kubenswrapper[4725]: I0120 
11:18:12.457021 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr6hr\" (UniqueName: \"kubernetes.io/projected/10d53364-23ca-4726-bed9-460fb6763fa1-kube-api-access-tr6hr\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd\" (UID: \"10d53364-23ca-4726-bed9-460fb6763fa1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" Jan 20 11:18:12 crc kubenswrapper[4725]: I0120 11:18:12.457121 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10d53364-23ca-4726-bed9-460fb6763fa1-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd\" (UID: \"10d53364-23ca-4726-bed9-460fb6763fa1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" Jan 20 11:18:12 crc kubenswrapper[4725]: I0120 11:18:12.457186 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10d53364-23ca-4726-bed9-460fb6763fa1-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd\" (UID: \"10d53364-23ca-4726-bed9-460fb6763fa1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" Jan 20 11:18:12 crc kubenswrapper[4725]: I0120 11:18:12.458035 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10d53364-23ca-4726-bed9-460fb6763fa1-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd\" (UID: \"10d53364-23ca-4726-bed9-460fb6763fa1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" Jan 20 11:18:12 crc kubenswrapper[4725]: I0120 11:18:12.459210 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/10d53364-23ca-4726-bed9-460fb6763fa1-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd\" (UID: \"10d53364-23ca-4726-bed9-460fb6763fa1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" Jan 20 11:18:12 crc kubenswrapper[4725]: I0120 11:18:12.503711 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr6hr\" (UniqueName: \"kubernetes.io/projected/10d53364-23ca-4726-bed9-460fb6763fa1-kube-api-access-tr6hr\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd\" (UID: \"10d53364-23ca-4726-bed9-460fb6763fa1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" Jan 20 11:18:12 crc kubenswrapper[4725]: I0120 11:18:12.710582 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" Jan 20 11:18:13 crc kubenswrapper[4725]: I0120 11:18:13.016978 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" event={"ID":"ea19653a-0b47-400b-bcce-8034cb7f6d55","Type":"ContainerStarted","Data":"ef7a2f43f95e56c61116413b01b184fa86e12ae3172ad5e0fede61298f0a6842"} Jan 20 11:18:13 crc kubenswrapper[4725]: I0120 11:18:13.027419 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" event={"ID":"484dd827-7fd5-4cbc-878f-400b31b6179c","Type":"ContainerStarted","Data":"71662f0ef7ef440bca1d87dc2d21ee57905d03942d4ac656bc52786b52bcd3b1"} Jan 20 11:18:13 crc kubenswrapper[4725]: I0120 11:18:13.035157 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvv86" 
event={"ID":"d4e296b6-b743-4253-8266-848212ba1001","Type":"ContainerStarted","Data":"e8a08a6d18e5019b0dee076330df7d141805a5d7b75d721ede3967e6a302582c"} Jan 20 11:18:13 crc kubenswrapper[4725]: I0120 11:18:13.035227 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvv86" event={"ID":"d4e296b6-b743-4253-8266-848212ba1001","Type":"ContainerStarted","Data":"a51a3e201153e9052123f62f1b87986d749e718183596e10305fe985accf5553"} Jan 20 11:18:13 crc kubenswrapper[4725]: I0120 11:18:13.364984 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" podStartSLOduration=5.908023232 podStartE2EDuration="8.364959083s" podCreationTimestamp="2026-01-20 11:18:05 +0000 UTC" firstStartedPulling="2026-01-20 11:18:07.826239157 +0000 UTC m=+816.034561150" lastFinishedPulling="2026-01-20 11:18:10.283175028 +0000 UTC m=+818.491497001" observedRunningTime="2026-01-20 11:18:13.361676029 +0000 UTC m=+821.569998012" watchObservedRunningTime="2026-01-20 11:18:13.364959083 +0000 UTC m=+821.573281066" Jan 20 11:18:13 crc kubenswrapper[4725]: I0120 11:18:13.673291 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd"] Jan 20 11:18:13 crc kubenswrapper[4725]: W0120 11:18:13.678883 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10d53364_23ca_4726_bed9_460fb6763fa1.slice/crio-f1c998f0cf5c0c88b3f6016cefee38892bc62bcc818c2c7c870f6cd0cbb1bf10 WatchSource:0}: Error finding container f1c998f0cf5c0c88b3f6016cefee38892bc62bcc818c2c7c870f6cd0cbb1bf10: Status 404 returned error can't find the container with id f1c998f0cf5c0c88b3f6016cefee38892bc62bcc818c2c7c870f6cd0cbb1bf10 Jan 20 11:18:14 crc kubenswrapper[4725]: I0120 11:18:14.192668 4725 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" event={"ID":"10d53364-23ca-4726-bed9-460fb6763fa1","Type":"ContainerStarted","Data":"f1c998f0cf5c0c88b3f6016cefee38892bc62bcc818c2c7c870f6cd0cbb1bf10"} Jan 20 11:18:14 crc kubenswrapper[4725]: I0120 11:18:14.194804 4725 generic.go:334] "Generic (PLEG): container finished" podID="d4e296b6-b743-4253-8266-848212ba1001" containerID="e8a08a6d18e5019b0dee076330df7d141805a5d7b75d721ede3967e6a302582c" exitCode=0 Jan 20 11:18:14 crc kubenswrapper[4725]: I0120 11:18:14.194855 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvv86" event={"ID":"d4e296b6-b743-4253-8266-848212ba1001","Type":"ContainerDied","Data":"e8a08a6d18e5019b0dee076330df7d141805a5d7b75d721ede3967e6a302582c"} Jan 20 11:18:14 crc kubenswrapper[4725]: I0120 11:18:14.205863 4725 generic.go:334] "Generic (PLEG): container finished" podID="ea19653a-0b47-400b-bcce-8034cb7f6d55" containerID="ef7a2f43f95e56c61116413b01b184fa86e12ae3172ad5e0fede61298f0a6842" exitCode=0 Jan 20 11:18:14 crc kubenswrapper[4725]: I0120 11:18:14.206722 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" event={"ID":"ea19653a-0b47-400b-bcce-8034cb7f6d55","Type":"ContainerDied","Data":"ef7a2f43f95e56c61116413b01b184fa86e12ae3172ad5e0fede61298f0a6842"} Jan 20 11:18:15 crc kubenswrapper[4725]: I0120 11:18:15.370961 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" event={"ID":"10d53364-23ca-4726-bed9-460fb6763fa1","Type":"ContainerStarted","Data":"96eea0696ae654e58012771a58a060f675a08683ebec8e9078a27e5e945d55c6"} Jan 20 11:18:16 crc kubenswrapper[4725]: I0120 11:18:16.434111 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" event={"ID":"ea19653a-0b47-400b-bcce-8034cb7f6d55","Type":"ContainerStarted","Data":"5e227c11d87415c07d800131112fe615e9dec133403066a5b7e1a417c675b996"} Jan 20 11:18:16 crc kubenswrapper[4725]: I0120 11:18:16.436672 4725 generic.go:334] "Generic (PLEG): container finished" podID="10d53364-23ca-4726-bed9-460fb6763fa1" containerID="96eea0696ae654e58012771a58a060f675a08683ebec8e9078a27e5e945d55c6" exitCode=0 Jan 20 11:18:16 crc kubenswrapper[4725]: I0120 11:18:16.436766 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" event={"ID":"10d53364-23ca-4726-bed9-460fb6763fa1","Type":"ContainerDied","Data":"96eea0696ae654e58012771a58a060f675a08683ebec8e9078a27e5e945d55c6"} Jan 20 11:18:16 crc kubenswrapper[4725]: I0120 11:18:16.439803 4725 generic.go:334] "Generic (PLEG): container finished" podID="484dd827-7fd5-4cbc-878f-400b31b6179c" containerID="71662f0ef7ef440bca1d87dc2d21ee57905d03942d4ac656bc52786b52bcd3b1" exitCode=0 Jan 20 11:18:16 crc kubenswrapper[4725]: I0120 11:18:16.439881 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" event={"ID":"484dd827-7fd5-4cbc-878f-400b31b6179c","Type":"ContainerDied","Data":"71662f0ef7ef440bca1d87dc2d21ee57905d03942d4ac656bc52786b52bcd3b1"} Jan 20 11:18:16 crc kubenswrapper[4725]: I0120 11:18:16.565448 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" podStartSLOduration=6.643830703 podStartE2EDuration="10.565430222s" podCreationTimestamp="2026-01-20 11:18:06 +0000 UTC" firstStartedPulling="2026-01-20 11:18:07.830804302 +0000 UTC m=+816.039126275" lastFinishedPulling="2026-01-20 11:18:11.752403821 +0000 UTC m=+819.960725794" 
observedRunningTime="2026-01-20 11:18:16.562626593 +0000 UTC m=+824.770948576" watchObservedRunningTime="2026-01-20 11:18:16.565430222 +0000 UTC m=+824.773752185" Jan 20 11:18:17 crc kubenswrapper[4725]: I0120 11:18:17.447908 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvv86" event={"ID":"d4e296b6-b743-4253-8266-848212ba1001","Type":"ContainerStarted","Data":"c24ec98af151bdfd2c547f9560f387e755da525ee903219fa6784697775e8546"} Jan 20 11:18:17 crc kubenswrapper[4725]: I0120 11:18:17.450842 4725 generic.go:334] "Generic (PLEG): container finished" podID="ea19653a-0b47-400b-bcce-8034cb7f6d55" containerID="5e227c11d87415c07d800131112fe615e9dec133403066a5b7e1a417c675b996" exitCode=0 Jan 20 11:18:17 crc kubenswrapper[4725]: I0120 11:18:17.450911 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" event={"ID":"ea19653a-0b47-400b-bcce-8034cb7f6d55","Type":"ContainerDied","Data":"5e227c11d87415c07d800131112fe615e9dec133403066a5b7e1a417c675b996"} Jan 20 11:18:18 crc kubenswrapper[4725]: I0120 11:18:18.773672 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" Jan 20 11:18:18 crc kubenswrapper[4725]: I0120 11:18:18.924695 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/484dd827-7fd5-4cbc-878f-400b31b6179c-util\") pod \"484dd827-7fd5-4cbc-878f-400b31b6179c\" (UID: \"484dd827-7fd5-4cbc-878f-400b31b6179c\") " Jan 20 11:18:18 crc kubenswrapper[4725]: I0120 11:18:18.924754 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52thq\" (UniqueName: \"kubernetes.io/projected/484dd827-7fd5-4cbc-878f-400b31b6179c-kube-api-access-52thq\") pod \"484dd827-7fd5-4cbc-878f-400b31b6179c\" (UID: \"484dd827-7fd5-4cbc-878f-400b31b6179c\") " Jan 20 11:18:18 crc kubenswrapper[4725]: I0120 11:18:18.924826 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/484dd827-7fd5-4cbc-878f-400b31b6179c-bundle\") pod \"484dd827-7fd5-4cbc-878f-400b31b6179c\" (UID: \"484dd827-7fd5-4cbc-878f-400b31b6179c\") " Jan 20 11:18:18 crc kubenswrapper[4725]: I0120 11:18:18.942285 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/484dd827-7fd5-4cbc-878f-400b31b6179c-bundle" (OuterVolumeSpecName: "bundle") pod "484dd827-7fd5-4cbc-878f-400b31b6179c" (UID: "484dd827-7fd5-4cbc-878f-400b31b6179c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:18:18 crc kubenswrapper[4725]: I0120 11:18:18.952282 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/484dd827-7fd5-4cbc-878f-400b31b6179c-kube-api-access-52thq" (OuterVolumeSpecName: "kube-api-access-52thq") pod "484dd827-7fd5-4cbc-878f-400b31b6179c" (UID: "484dd827-7fd5-4cbc-878f-400b31b6179c"). InnerVolumeSpecName "kube-api-access-52thq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:18:18 crc kubenswrapper[4725]: I0120 11:18:18.969314 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/484dd827-7fd5-4cbc-878f-400b31b6179c-util" (OuterVolumeSpecName: "util") pod "484dd827-7fd5-4cbc-878f-400b31b6179c" (UID: "484dd827-7fd5-4cbc-878f-400b31b6179c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.026881 4725 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/484dd827-7fd5-4cbc-878f-400b31b6179c-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.026939 4725 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/484dd827-7fd5-4cbc-878f-400b31b6179c-util\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.026951 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52thq\" (UniqueName: \"kubernetes.io/projected/484dd827-7fd5-4cbc-878f-400b31b6179c-kube-api-access-52thq\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.286384 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.334118 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea19653a-0b47-400b-bcce-8034cb7f6d55-bundle\") pod \"ea19653a-0b47-400b-bcce-8034cb7f6d55\" (UID: \"ea19653a-0b47-400b-bcce-8034cb7f6d55\") " Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.334175 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zflz2\" (UniqueName: \"kubernetes.io/projected/ea19653a-0b47-400b-bcce-8034cb7f6d55-kube-api-access-zflz2\") pod \"ea19653a-0b47-400b-bcce-8034cb7f6d55\" (UID: \"ea19653a-0b47-400b-bcce-8034cb7f6d55\") " Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.335779 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea19653a-0b47-400b-bcce-8034cb7f6d55-bundle" (OuterVolumeSpecName: "bundle") pod "ea19653a-0b47-400b-bcce-8034cb7f6d55" (UID: "ea19653a-0b47-400b-bcce-8034cb7f6d55"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.337560 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea19653a-0b47-400b-bcce-8034cb7f6d55-kube-api-access-zflz2" (OuterVolumeSpecName: "kube-api-access-zflz2") pod "ea19653a-0b47-400b-bcce-8034cb7f6d55" (UID: "ea19653a-0b47-400b-bcce-8034cb7f6d55"). InnerVolumeSpecName "kube-api-access-zflz2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.459945 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ea19653a-0b47-400b-bcce-8034cb7f6d55-util\") pod \"ea19653a-0b47-400b-bcce-8034cb7f6d55\" (UID: \"ea19653a-0b47-400b-bcce-8034cb7f6d55\") " Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.460275 4725 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea19653a-0b47-400b-bcce-8034cb7f6d55-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.460290 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zflz2\" (UniqueName: \"kubernetes.io/projected/ea19653a-0b47-400b-bcce-8034cb7f6d55-kube-api-access-zflz2\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.495445 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea19653a-0b47-400b-bcce-8034cb7f6d55-util" (OuterVolumeSpecName: "util") pod "ea19653a-0b47-400b-bcce-8034cb7f6d55" (UID: "ea19653a-0b47-400b-bcce-8034cb7f6d55"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.562168 4725 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ea19653a-0b47-400b-bcce-8034cb7f6d55-util\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.699966 4725 generic.go:334] "Generic (PLEG): container finished" podID="d4e296b6-b743-4253-8266-848212ba1001" containerID="c24ec98af151bdfd2c547f9560f387e755da525ee903219fa6784697775e8546" exitCode=0 Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.700109 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvv86" event={"ID":"d4e296b6-b743-4253-8266-848212ba1001","Type":"ContainerDied","Data":"c24ec98af151bdfd2c547f9560f387e755da525ee903219fa6784697775e8546"} Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.733851 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" event={"ID":"ea19653a-0b47-400b-bcce-8034cb7f6d55","Type":"ContainerDied","Data":"53e9a49e6f0bceb5eee877a245693c04ece0f113b83cdcb71638504428b9b03b"} Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.733929 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53e9a49e6f0bceb5eee877a245693c04ece0f113b83cdcb71638504428b9b03b" Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.733869 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.782795 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" event={"ID":"484dd827-7fd5-4cbc-878f-400b31b6179c","Type":"ContainerDied","Data":"f83197650a1b5fbe35a37eba7340df1e95f9e5e5cf1734b547d1388f2b52f207"} Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.782864 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f83197650a1b5fbe35a37eba7340df1e95f9e5e5cf1734b547d1388f2b52f207" Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.783000 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" Jan 20 11:18:20 crc kubenswrapper[4725]: I0120 11:18:20.648736 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:20 crc kubenswrapper[4725]: I0120 11:18:20.715914 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:20 crc kubenswrapper[4725]: I0120 11:18:20.799524 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvv86" event={"ID":"d4e296b6-b743-4253-8266-848212ba1001","Type":"ContainerStarted","Data":"ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61"} Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.189284 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-sl5rg"] Jan 20 11:18:21 crc kubenswrapper[4725]: E0120 11:18:21.189572 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="484dd827-7fd5-4cbc-878f-400b31b6179c" 
containerName="util" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.189598 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="484dd827-7fd5-4cbc-878f-400b31b6179c" containerName="util" Jan 20 11:18:21 crc kubenswrapper[4725]: E0120 11:18:21.189610 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea19653a-0b47-400b-bcce-8034cb7f6d55" containerName="util" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.189616 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea19653a-0b47-400b-bcce-8034cb7f6d55" containerName="util" Jan 20 11:18:21 crc kubenswrapper[4725]: E0120 11:18:21.189627 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="484dd827-7fd5-4cbc-878f-400b31b6179c" containerName="extract" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.189633 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="484dd827-7fd5-4cbc-878f-400b31b6179c" containerName="extract" Jan 20 11:18:21 crc kubenswrapper[4725]: E0120 11:18:21.189640 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea19653a-0b47-400b-bcce-8034cb7f6d55" containerName="extract" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.189648 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea19653a-0b47-400b-bcce-8034cb7f6d55" containerName="extract" Jan 20 11:18:21 crc kubenswrapper[4725]: E0120 11:18:21.189656 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="484dd827-7fd5-4cbc-878f-400b31b6179c" containerName="pull" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.189662 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="484dd827-7fd5-4cbc-878f-400b31b6179c" containerName="pull" Jan 20 11:18:21 crc kubenswrapper[4725]: E0120 11:18:21.189673 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea19653a-0b47-400b-bcce-8034cb7f6d55" containerName="pull" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.189679 4725 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="ea19653a-0b47-400b-bcce-8034cb7f6d55" containerName="pull" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.189852 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="484dd827-7fd5-4cbc-878f-400b31b6179c" containerName="extract" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.189876 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea19653a-0b47-400b-bcce-8034cb7f6d55" containerName="extract" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.190420 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sl5rg" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.192630 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.192783 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-r68zl" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.193640 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.214597 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-sl5rg"] Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.296892 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x66r\" (UniqueName: \"kubernetes.io/projected/0bc9f0db-ee2d-43d3-8fc7-66f2b155c710-kube-api-access-8x66r\") pod \"obo-prometheus-operator-68bc856cb9-sl5rg\" (UID: \"0bc9f0db-ee2d-43d3-8fc7-66f2b155c710\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sl5rg" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.338955 4725 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5"] Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.339773 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.357434 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.357487 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-2qzm8" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.372800 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-lh85b"] Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.373647 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-lh85b" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.397719 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x66r\" (UniqueName: \"kubernetes.io/projected/0bc9f0db-ee2d-43d3-8fc7-66f2b155c710-kube-api-access-8x66r\") pod \"obo-prometheus-operator-68bc856cb9-sl5rg\" (UID: \"0bc9f0db-ee2d-43d3-8fc7-66f2b155c710\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sl5rg" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.397780 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a5d78053-6a08-448a-93ca-1c0e2334617a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5\" (UID: \"a5d78053-6a08-448a-93ca-1c0e2334617a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.397815 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a5d78053-6a08-448a-93ca-1c0e2334617a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5\" (UID: \"a5d78053-6a08-448a-93ca-1c0e2334617a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.401407 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5"] Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.432635 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-lh85b"] Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.447384 4725 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8x66r\" (UniqueName: \"kubernetes.io/projected/0bc9f0db-ee2d-43d3-8fc7-66f2b155c710-kube-api-access-8x66r\") pod \"obo-prometheus-operator-68bc856cb9-sl5rg\" (UID: \"0bc9f0db-ee2d-43d3-8fc7-66f2b155c710\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sl5rg" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.535097 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sl5rg" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.536259 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/05acb89f-79ef-4e5a-8713-af3abbf86d5a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c88c9f498-lh85b\" (UID: \"05acb89f-79ef-4e5a-8713-af3abbf86d5a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-lh85b" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.536534 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a5d78053-6a08-448a-93ca-1c0e2334617a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5\" (UID: \"a5d78053-6a08-448a-93ca-1c0e2334617a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.536613 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/05acb89f-79ef-4e5a-8713-af3abbf86d5a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c88c9f498-lh85b\" (UID: \"05acb89f-79ef-4e5a-8713-af3abbf86d5a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-lh85b" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 
11:18:21.536687 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a5d78053-6a08-448a-93ca-1c0e2334617a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5\" (UID: \"a5d78053-6a08-448a-93ca-1c0e2334617a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.547934 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a5d78053-6a08-448a-93ca-1c0e2334617a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5\" (UID: \"a5d78053-6a08-448a-93ca-1c0e2334617a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.563448 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a5d78053-6a08-448a-93ca-1c0e2334617a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5\" (UID: \"a5d78053-6a08-448a-93ca-1c0e2334617a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.638534 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/05acb89f-79ef-4e5a-8713-af3abbf86d5a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c88c9f498-lh85b\" (UID: \"05acb89f-79ef-4e5a-8713-af3abbf86d5a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-lh85b" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.638621 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/05acb89f-79ef-4e5a-8713-af3abbf86d5a-apiservice-cert\") 
pod \"obo-prometheus-operator-admission-webhook-c88c9f498-lh85b\" (UID: \"05acb89f-79ef-4e5a-8713-af3abbf86d5a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-lh85b" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.640835 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-cjnzp"] Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.644707 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/05acb89f-79ef-4e5a-8713-af3abbf86d5a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c88c9f498-lh85b\" (UID: \"05acb89f-79ef-4e5a-8713-af3abbf86d5a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-lh85b" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.650760 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/05acb89f-79ef-4e5a-8713-af3abbf86d5a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c88c9f498-lh85b\" (UID: \"05acb89f-79ef-4e5a-8713-af3abbf86d5a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-lh85b" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.657436 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.666388 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-cjnzp" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.670537 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-jpqnf" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.670850 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.685366 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-cjnzp"] Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.693442 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-lh85b" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.748099 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5czqx\" (UniqueName: \"kubernetes.io/projected/ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002-kube-api-access-5czqx\") pod \"observability-operator-59bdc8b94-cjnzp\" (UID: \"ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002\") " pod="openshift-operators/observability-operator-59bdc8b94-cjnzp" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.748176 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002-observability-operator-tls\") pod \"observability-operator-59bdc8b94-cjnzp\" (UID: \"ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002\") " pod="openshift-operators/observability-operator-59bdc8b94-cjnzp" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.849675 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002-observability-operator-tls\") pod \"observability-operator-59bdc8b94-cjnzp\" (UID: \"ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002\") " pod="openshift-operators/observability-operator-59bdc8b94-cjnzp" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.849793 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5czqx\" (UniqueName: \"kubernetes.io/projected/ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002-kube-api-access-5czqx\") pod \"observability-operator-59bdc8b94-cjnzp\" (UID: \"ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002\") " pod="openshift-operators/observability-operator-59bdc8b94-cjnzp" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.855019 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002-observability-operator-tls\") pod \"observability-operator-59bdc8b94-cjnzp\" (UID: \"ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002\") " pod="openshift-operators/observability-operator-59bdc8b94-cjnzp" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.882796 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5czqx\" (UniqueName: \"kubernetes.io/projected/ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002-kube-api-access-5czqx\") pod \"observability-operator-59bdc8b94-cjnzp\" (UID: \"ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002\") " pod="openshift-operators/observability-operator-59bdc8b94-cjnzp" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.900284 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-ckz5m"] Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.901131 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-ckz5m" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.911775 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-zcpfd" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.917943 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-ckz5m"] Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.986066 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vvv86" podStartSLOduration=5.724848405 podStartE2EDuration="11.986035453s" podCreationTimestamp="2026-01-20 11:18:10 +0000 UTC" firstStartedPulling="2026-01-20 11:18:14.197348534 +0000 UTC m=+822.405670507" lastFinishedPulling="2026-01-20 11:18:20.458535582 +0000 UTC m=+828.666857555" observedRunningTime="2026-01-20 11:18:21.978603168 +0000 UTC m=+830.186925161" watchObservedRunningTime="2026-01-20 11:18:21.986035453 +0000 UTC m=+830.194357426" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.994957 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-cjnzp" Jan 20 11:18:22 crc kubenswrapper[4725]: I0120 11:18:22.070208 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5a2dcc7a-6d62-412d-a25f-fea592c85bf5-openshift-service-ca\") pod \"perses-operator-5bf474d74f-ckz5m\" (UID: \"5a2dcc7a-6d62-412d-a25f-fea592c85bf5\") " pod="openshift-operators/perses-operator-5bf474d74f-ckz5m" Jan 20 11:18:22 crc kubenswrapper[4725]: I0120 11:18:22.070294 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5nhp\" (UniqueName: \"kubernetes.io/projected/5a2dcc7a-6d62-412d-a25f-fea592c85bf5-kube-api-access-h5nhp\") pod \"perses-operator-5bf474d74f-ckz5m\" (UID: \"5a2dcc7a-6d62-412d-a25f-fea592c85bf5\") " pod="openshift-operators/perses-operator-5bf474d74f-ckz5m" Jan 20 11:18:22 crc kubenswrapper[4725]: I0120 11:18:22.172346 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5a2dcc7a-6d62-412d-a25f-fea592c85bf5-openshift-service-ca\") pod \"perses-operator-5bf474d74f-ckz5m\" (UID: \"5a2dcc7a-6d62-412d-a25f-fea592c85bf5\") " pod="openshift-operators/perses-operator-5bf474d74f-ckz5m" Jan 20 11:18:22 crc kubenswrapper[4725]: I0120 11:18:22.172895 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5nhp\" (UniqueName: \"kubernetes.io/projected/5a2dcc7a-6d62-412d-a25f-fea592c85bf5-kube-api-access-h5nhp\") pod \"perses-operator-5bf474d74f-ckz5m\" (UID: \"5a2dcc7a-6d62-412d-a25f-fea592c85bf5\") " pod="openshift-operators/perses-operator-5bf474d74f-ckz5m" Jan 20 11:18:22 crc kubenswrapper[4725]: I0120 11:18:22.174754 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/5a2dcc7a-6d62-412d-a25f-fea592c85bf5-openshift-service-ca\") pod \"perses-operator-5bf474d74f-ckz5m\" (UID: \"5a2dcc7a-6d62-412d-a25f-fea592c85bf5\") " pod="openshift-operators/perses-operator-5bf474d74f-ckz5m" Jan 20 11:18:22 crc kubenswrapper[4725]: I0120 11:18:22.204854 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5nhp\" (UniqueName: \"kubernetes.io/projected/5a2dcc7a-6d62-412d-a25f-fea592c85bf5-kube-api-access-h5nhp\") pod \"perses-operator-5bf474d74f-ckz5m\" (UID: \"5a2dcc7a-6d62-412d-a25f-fea592c85bf5\") " pod="openshift-operators/perses-operator-5bf474d74f-ckz5m" Jan 20 11:18:22 crc kubenswrapper[4725]: I0120 11:18:22.233489 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-ckz5m" Jan 20 11:18:22 crc kubenswrapper[4725]: I0120 11:18:22.846553 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5"] Jan 20 11:18:23 crc kubenswrapper[4725]: I0120 11:18:23.033632 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-sl5rg"] Jan 20 11:18:23 crc kubenswrapper[4725]: I0120 11:18:23.063953 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-lh85b"] Jan 20 11:18:23 crc kubenswrapper[4725]: W0120 11:18:23.103242 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05acb89f_79ef_4e5a_8713_af3abbf86d5a.slice/crio-032d77d60c1fe0014f2d818102d23804daf1b5e22577061adb37e1fe80c594ea WatchSource:0}: Error finding container 032d77d60c1fe0014f2d818102d23804daf1b5e22577061adb37e1fe80c594ea: Status 404 returned error can't find the container with id 032d77d60c1fe0014f2d818102d23804daf1b5e22577061adb37e1fe80c594ea Jan 20 11:18:23 crc 
kubenswrapper[4725]: I0120 11:18:23.109866 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-cjnzp"] Jan 20 11:18:23 crc kubenswrapper[4725]: I0120 11:18:23.311789 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-ckz5m"] Jan 20 11:18:23 crc kubenswrapper[4725]: W0120 11:18:23.316438 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a2dcc7a_6d62_412d_a25f_fea592c85bf5.slice/crio-7bf3f348e71816881246fbaf15bbad1a3cde68876578c739a3ba6453442ea804 WatchSource:0}: Error finding container 7bf3f348e71816881246fbaf15bbad1a3cde68876578c739a3ba6453442ea804: Status 404 returned error can't find the container with id 7bf3f348e71816881246fbaf15bbad1a3cde68876578c739a3ba6453442ea804 Jan 20 11:18:23 crc kubenswrapper[4725]: I0120 11:18:23.972233 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-cjnzp" event={"ID":"ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002","Type":"ContainerStarted","Data":"c1de0eb75d154e63b4916dc1e6f8f88ec95285377d50f04f6e80570b1fbf778b"} Jan 20 11:18:23 crc kubenswrapper[4725]: I0120 11:18:23.973609 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-lh85b" event={"ID":"05acb89f-79ef-4e5a-8713-af3abbf86d5a","Type":"ContainerStarted","Data":"032d77d60c1fe0014f2d818102d23804daf1b5e22577061adb37e1fe80c594ea"} Jan 20 11:18:23 crc kubenswrapper[4725]: I0120 11:18:23.974402 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sl5rg" event={"ID":"0bc9f0db-ee2d-43d3-8fc7-66f2b155c710","Type":"ContainerStarted","Data":"fb06f071a9a039e6d50f013cf80952e3e708fd47a93aa93f0cc92f67e516a839"} Jan 20 11:18:23 crc kubenswrapper[4725]: I0120 11:18:23.975210 4725 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-ckz5m" event={"ID":"5a2dcc7a-6d62-412d-a25f-fea592c85bf5","Type":"ContainerStarted","Data":"7bf3f348e71816881246fbaf15bbad1a3cde68876578c739a3ba6453442ea804"} Jan 20 11:18:23 crc kubenswrapper[4725]: I0120 11:18:23.976229 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5" event={"ID":"a5d78053-6a08-448a-93ca-1c0e2334617a","Type":"ContainerStarted","Data":"885a97651c7e0433abf095423f0e90eff0d1ae1198320ffd0e551b5d406aa354"} Jan 20 11:18:25 crc kubenswrapper[4725]: I0120 11:18:25.146713 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6dxvd"] Jan 20 11:18:25 crc kubenswrapper[4725]: I0120 11:18:25.147221 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6dxvd" podUID="dfbfb8b9-615e-477a-9ab8-112b0c09aa12" containerName="registry-server" containerID="cri-o://b0f70b712c905a55b259faee2c4b5bab10f818119e51a23f7359145520a977d4" gracePeriod=2 Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.092904 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-6886c99b94-tzbc7"] Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.094183 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-6886c99b94-tzbc7" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.099448 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-service-cert" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.099787 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"kube-root-ca.crt" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.099927 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"openshift-service-ca.crt" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.101289 4725 generic.go:334] "Generic (PLEG): container finished" podID="dfbfb8b9-615e-477a-9ab8-112b0c09aa12" containerID="b0f70b712c905a55b259faee2c4b5bab10f818119e51a23f7359145520a977d4" exitCode=0 Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.101337 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dxvd" event={"ID":"dfbfb8b9-615e-477a-9ab8-112b0c09aa12","Type":"ContainerDied","Data":"b0f70b712c905a55b259faee2c4b5bab10f818119e51a23f7359145520a977d4"} Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.102964 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-dockercfg-mh884" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.195723 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntxxn\" (UniqueName: \"kubernetes.io/projected/ce11e344-b219-4b22-b05b-a21b78fc7d98-kube-api-access-ntxxn\") pod \"elastic-operator-6886c99b94-tzbc7\" (UID: \"ce11e344-b219-4b22-b05b-a21b78fc7d98\") " pod="service-telemetry/elastic-operator-6886c99b94-tzbc7" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.196097 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" 
(UniqueName: \"kubernetes.io/secret/ce11e344-b219-4b22-b05b-a21b78fc7d98-webhook-cert\") pod \"elastic-operator-6886c99b94-tzbc7\" (UID: \"ce11e344-b219-4b22-b05b-a21b78fc7d98\") " pod="service-telemetry/elastic-operator-6886c99b94-tzbc7" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.196211 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce11e344-b219-4b22-b05b-a21b78fc7d98-apiservice-cert\") pod \"elastic-operator-6886c99b94-tzbc7\" (UID: \"ce11e344-b219-4b22-b05b-a21b78fc7d98\") " pod="service-telemetry/elastic-operator-6886c99b94-tzbc7" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.298260 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntxxn\" (UniqueName: \"kubernetes.io/projected/ce11e344-b219-4b22-b05b-a21b78fc7d98-kube-api-access-ntxxn\") pod \"elastic-operator-6886c99b94-tzbc7\" (UID: \"ce11e344-b219-4b22-b05b-a21b78fc7d98\") " pod="service-telemetry/elastic-operator-6886c99b94-tzbc7" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.298400 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce11e344-b219-4b22-b05b-a21b78fc7d98-webhook-cert\") pod \"elastic-operator-6886c99b94-tzbc7\" (UID: \"ce11e344-b219-4b22-b05b-a21b78fc7d98\") " pod="service-telemetry/elastic-operator-6886c99b94-tzbc7" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.298437 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce11e344-b219-4b22-b05b-a21b78fc7d98-apiservice-cert\") pod \"elastic-operator-6886c99b94-tzbc7\" (UID: \"ce11e344-b219-4b22-b05b-a21b78fc7d98\") " pod="service-telemetry/elastic-operator-6886c99b94-tzbc7" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.306445 4725 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce11e344-b219-4b22-b05b-a21b78fc7d98-apiservice-cert\") pod \"elastic-operator-6886c99b94-tzbc7\" (UID: \"ce11e344-b219-4b22-b05b-a21b78fc7d98\") " pod="service-telemetry/elastic-operator-6886c99b94-tzbc7" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.320528 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce11e344-b219-4b22-b05b-a21b78fc7d98-webhook-cert\") pod \"elastic-operator-6886c99b94-tzbc7\" (UID: \"ce11e344-b219-4b22-b05b-a21b78fc7d98\") " pod="service-telemetry/elastic-operator-6886c99b94-tzbc7" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.441241 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntxxn\" (UniqueName: \"kubernetes.io/projected/ce11e344-b219-4b22-b05b-a21b78fc7d98-kube-api-access-ntxxn\") pod \"elastic-operator-6886c99b94-tzbc7\" (UID: \"ce11e344-b219-4b22-b05b-a21b78fc7d98\") " pod="service-telemetry/elastic-operator-6886c99b94-tzbc7" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.480383 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-6886c99b94-tzbc7"] Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.716283 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-6886c99b94-tzbc7" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.730141 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.730238 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.730300 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.731482 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f824083bcf978a042383462398bd5ed39ef803d17307d0e1c02d7c37c541d2e2"} pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.731553 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" containerID="cri-o://f824083bcf978a042383462398bd5ed39ef803d17307d0e1c02d7c37c541d2e2" gracePeriod=600 Jan 20 11:18:27 crc kubenswrapper[4725]: I0120 11:18:27.121852 4725 generic.go:334] "Generic (PLEG): container finished" 
podID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerID="f824083bcf978a042383462398bd5ed39ef803d17307d0e1c02d7c37c541d2e2" exitCode=0 Jan 20 11:18:27 crc kubenswrapper[4725]: I0120 11:18:27.121930 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerDied","Data":"f824083bcf978a042383462398bd5ed39ef803d17307d0e1c02d7c37c541d2e2"} Jan 20 11:18:27 crc kubenswrapper[4725]: I0120 11:18:27.121982 4725 scope.go:117] "RemoveContainer" containerID="76a81355d30bc66a5871f02822f0a9240f627473cc4697ea154fb735e225c69b" Jan 20 11:18:28 crc kubenswrapper[4725]: I0120 11:18:28.571880 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:28 crc kubenswrapper[4725]: I0120 11:18:28.733047 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-catalog-content\") pod \"dfbfb8b9-615e-477a-9ab8-112b0c09aa12\" (UID: \"dfbfb8b9-615e-477a-9ab8-112b0c09aa12\") " Jan 20 11:18:28 crc kubenswrapper[4725]: I0120 11:18:28.733331 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrm49\" (UniqueName: \"kubernetes.io/projected/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-kube-api-access-nrm49\") pod \"dfbfb8b9-615e-477a-9ab8-112b0c09aa12\" (UID: \"dfbfb8b9-615e-477a-9ab8-112b0c09aa12\") " Jan 20 11:18:28 crc kubenswrapper[4725]: I0120 11:18:28.733467 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-utilities\") pod \"dfbfb8b9-615e-477a-9ab8-112b0c09aa12\" (UID: \"dfbfb8b9-615e-477a-9ab8-112b0c09aa12\") " Jan 20 11:18:28 crc kubenswrapper[4725]: I0120 11:18:28.737445 4725 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-utilities" (OuterVolumeSpecName: "utilities") pod "dfbfb8b9-615e-477a-9ab8-112b0c09aa12" (UID: "dfbfb8b9-615e-477a-9ab8-112b0c09aa12"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:18:28 crc kubenswrapper[4725]: I0120 11:18:28.748589 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-kube-api-access-nrm49" (OuterVolumeSpecName: "kube-api-access-nrm49") pod "dfbfb8b9-615e-477a-9ab8-112b0c09aa12" (UID: "dfbfb8b9-615e-477a-9ab8-112b0c09aa12"). InnerVolumeSpecName "kube-api-access-nrm49". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:18:28 crc kubenswrapper[4725]: I0120 11:18:28.838794 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrm49\" (UniqueName: \"kubernetes.io/projected/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-kube-api-access-nrm49\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:28 crc kubenswrapper[4725]: I0120 11:18:28.839484 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:28 crc kubenswrapper[4725]: I0120 11:18:28.914946 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dfbfb8b9-615e-477a-9ab8-112b0c09aa12" (UID: "dfbfb8b9-615e-477a-9ab8-112b0c09aa12"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:18:28 crc kubenswrapper[4725]: I0120 11:18:28.939972 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:28 crc kubenswrapper[4725]: I0120 11:18:28.981174 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-6886c99b94-tzbc7"] Jan 20 11:18:28 crc kubenswrapper[4725]: W0120 11:18:28.995295 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce11e344_b219_4b22_b05b_a21b78fc7d98.slice/crio-bdaf1e47694847c13f48f8ef68d543bfff55003761b6498c40d53ec6c385d0a9 WatchSource:0}: Error finding container bdaf1e47694847c13f48f8ef68d543bfff55003761b6498c40d53ec6c385d0a9: Status 404 returned error can't find the container with id bdaf1e47694847c13f48f8ef68d543bfff55003761b6498c40d53ec6c385d0a9 Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.167177 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-6886c99b94-tzbc7" event={"ID":"ce11e344-b219-4b22-b05b-a21b78fc7d98","Type":"ContainerStarted","Data":"bdaf1e47694847c13f48f8ef68d543bfff55003761b6498c40d53ec6c385d0a9"} Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.175459 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dxvd" event={"ID":"dfbfb8b9-615e-477a-9ab8-112b0c09aa12","Type":"ContainerDied","Data":"9e71a7c6e17b663ce879958f77ba79bc2e033518300716782480d4a94f0a9c49"} Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.175535 4725 scope.go:117] "RemoveContainer" containerID="b0f70b712c905a55b259faee2c4b5bab10f818119e51a23f7359145520a977d4" Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.175684 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.186639 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" event={"ID":"10d53364-23ca-4726-bed9-460fb6763fa1","Type":"ContainerStarted","Data":"18f46d6d120071cafa0d0486418f2f1a267e6e4ccb6923aa5ce9fdea31b10509"} Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.200983 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6dxvd"] Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.202850 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"617ea79dfea330e669f8b0c629d26e31c927c0e6f932d5db78fa3a4c4d666946"} Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.215108 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6dxvd"] Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.233998 4725 scope.go:117] "RemoveContainer" containerID="b832293d87b16279d6b773637db081d939113e1ece2da21f76dbd46e266d78ed" Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.355558 4725 scope.go:117] "RemoveContainer" containerID="b28c935c40fb5964b74a8daaace2c11004f108bb7d072c1c2c0d741d5ef699dd" Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.642925 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-7p9dr"] Jan 20 11:18:29 crc kubenswrapper[4725]: E0120 11:18:29.643748 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfbfb8b9-615e-477a-9ab8-112b0c09aa12" containerName="registry-server" Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.643769 4725 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="dfbfb8b9-615e-477a-9ab8-112b0c09aa12" containerName="registry-server" Jan 20 11:18:29 crc kubenswrapper[4725]: E0120 11:18:29.643787 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfbfb8b9-615e-477a-9ab8-112b0c09aa12" containerName="extract-content" Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.643795 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfbfb8b9-615e-477a-9ab8-112b0c09aa12" containerName="extract-content" Jan 20 11:18:29 crc kubenswrapper[4725]: E0120 11:18:29.643818 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfbfb8b9-615e-477a-9ab8-112b0c09aa12" containerName="extract-utilities" Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.643826 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfbfb8b9-615e-477a-9ab8-112b0c09aa12" containerName="extract-utilities" Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.643950 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfbfb8b9-615e-477a-9ab8-112b0c09aa12" containerName="registry-server" Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.644854 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-7p9dr" Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.647673 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"interconnect-operator-dockercfg-q4m8g" Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.657569 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-7p9dr"] Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.758260 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lt87\" (UniqueName: \"kubernetes.io/projected/a923dc59-d518-4ee4-a92c-1bb5ad6e7158-kube-api-access-9lt87\") pod \"interconnect-operator-5bb49f789d-7p9dr\" (UID: \"a923dc59-d518-4ee4-a92c-1bb5ad6e7158\") " pod="service-telemetry/interconnect-operator-5bb49f789d-7p9dr" Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.860308 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lt87\" (UniqueName: \"kubernetes.io/projected/a923dc59-d518-4ee4-a92c-1bb5ad6e7158-kube-api-access-9lt87\") pod \"interconnect-operator-5bb49f789d-7p9dr\" (UID: \"a923dc59-d518-4ee4-a92c-1bb5ad6e7158\") " pod="service-telemetry/interconnect-operator-5bb49f789d-7p9dr" Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.885205 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lt87\" (UniqueName: \"kubernetes.io/projected/a923dc59-d518-4ee4-a92c-1bb5ad6e7158-kube-api-access-9lt87\") pod \"interconnect-operator-5bb49f789d-7p9dr\" (UID: \"a923dc59-d518-4ee4-a92c-1bb5ad6e7158\") " pod="service-telemetry/interconnect-operator-5bb49f789d-7p9dr" Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.972452 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-7p9dr" Jan 20 11:18:30 crc kubenswrapper[4725]: I0120 11:18:30.287378 4725 generic.go:334] "Generic (PLEG): container finished" podID="10d53364-23ca-4726-bed9-460fb6763fa1" containerID="18f46d6d120071cafa0d0486418f2f1a267e6e4ccb6923aa5ce9fdea31b10509" exitCode=0 Jan 20 11:18:30 crc kubenswrapper[4725]: I0120 11:18:30.287572 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" event={"ID":"10d53364-23ca-4726-bed9-460fb6763fa1","Type":"ContainerDied","Data":"18f46d6d120071cafa0d0486418f2f1a267e6e4ccb6923aa5ce9fdea31b10509"} Jan 20 11:18:30 crc kubenswrapper[4725]: I0120 11:18:30.427852 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-7p9dr"] Jan 20 11:18:30 crc kubenswrapper[4725]: W0120 11:18:30.449699 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda923dc59_d518_4ee4_a92c_1bb5ad6e7158.slice/crio-253c2cce710af84682cf4560c3a6ffb6cd2947af322302ef2c8998eb3dc50841 WatchSource:0}: Error finding container 253c2cce710af84682cf4560c3a6ffb6cd2947af322302ef2c8998eb3dc50841: Status 404 returned error can't find the container with id 253c2cce710af84682cf4560c3a6ffb6cd2947af322302ef2c8998eb3dc50841 Jan 20 11:18:30 crc kubenswrapper[4725]: I0120 11:18:30.953202 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfbfb8b9-615e-477a-9ab8-112b0c09aa12" path="/var/lib/kubelet/pods/dfbfb8b9-615e-477a-9ab8-112b0c09aa12/volumes" Jan 20 11:18:32 crc kubenswrapper[4725]: I0120 11:18:32.155721 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:32 crc kubenswrapper[4725]: I0120 11:18:32.156040 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:32 crc kubenswrapper[4725]: I0120 11:18:32.230253 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-7p9dr" event={"ID":"a923dc59-d518-4ee4-a92c-1bb5ad6e7158","Type":"ContainerStarted","Data":"253c2cce710af84682cf4560c3a6ffb6cd2947af322302ef2c8998eb3dc50841"} Jan 20 11:18:32 crc kubenswrapper[4725]: I0120 11:18:32.262257 4725 generic.go:334] "Generic (PLEG): container finished" podID="10d53364-23ca-4726-bed9-460fb6763fa1" containerID="523aace2da8268f02b1c1009bb3b3093590c510c65568a8b4238b8dfa2bb2bed" exitCode=0 Jan 20 11:18:32 crc kubenswrapper[4725]: I0120 11:18:32.262332 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" event={"ID":"10d53364-23ca-4726-bed9-460fb6763fa1","Type":"ContainerDied","Data":"523aace2da8268f02b1c1009bb3b3093590c510c65568a8b4238b8dfa2bb2bed"} Jan 20 11:18:32 crc kubenswrapper[4725]: I0120 11:18:32.292343 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:32 crc kubenswrapper[4725]: I0120 11:18:32.393175 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:36 crc kubenswrapper[4725]: I0120 11:18:36.151545 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vvv86"] Jan 20 11:18:36 crc kubenswrapper[4725]: I0120 11:18:36.155681 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vvv86" podUID="d4e296b6-b743-4253-8266-848212ba1001" containerName="registry-server" containerID="cri-o://ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61" gracePeriod=2 Jan 20 11:18:37 crc kubenswrapper[4725]: I0120 
11:18:37.314201 4725 generic.go:334] "Generic (PLEG): container finished" podID="d4e296b6-b743-4253-8266-848212ba1001" containerID="ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61" exitCode=0 Jan 20 11:18:37 crc kubenswrapper[4725]: I0120 11:18:37.314770 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvv86" event={"ID":"d4e296b6-b743-4253-8266-848212ba1001","Type":"ContainerDied","Data":"ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61"} Jan 20 11:18:41 crc kubenswrapper[4725]: E0120 11:18:41.403974 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61 is running failed: container process not found" containerID="ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61" cmd=["grpc_health_probe","-addr=:50051"] Jan 20 11:18:41 crc kubenswrapper[4725]: E0120 11:18:41.404996 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61 is running failed: container process not found" containerID="ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61" cmd=["grpc_health_probe","-addr=:50051"] Jan 20 11:18:41 crc kubenswrapper[4725]: E0120 11:18:41.405491 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61 is running failed: container process not found" containerID="ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61" cmd=["grpc_health_probe","-addr=:50051"] Jan 20 11:18:41 crc kubenswrapper[4725]: E0120 11:18:41.405535 4725 prober.go:104] "Probe errored" err="rpc error: code 
= NotFound desc = container is not created or running: checking if PID of ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-vvv86" podUID="d4e296b6-b743-4253-8266-848212ba1001" containerName="registry-server" Jan 20 11:18:48 crc kubenswrapper[4725]: I0120 11:18:47.867379 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" Jan 20 11:18:48 crc kubenswrapper[4725]: I0120 11:18:48.033251 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr6hr\" (UniqueName: \"kubernetes.io/projected/10d53364-23ca-4726-bed9-460fb6763fa1-kube-api-access-tr6hr\") pod \"10d53364-23ca-4726-bed9-460fb6763fa1\" (UID: \"10d53364-23ca-4726-bed9-460fb6763fa1\") " Jan 20 11:18:48 crc kubenswrapper[4725]: I0120 11:18:48.033777 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10d53364-23ca-4726-bed9-460fb6763fa1-util\") pod \"10d53364-23ca-4726-bed9-460fb6763fa1\" (UID: \"10d53364-23ca-4726-bed9-460fb6763fa1\") " Jan 20 11:18:48 crc kubenswrapper[4725]: I0120 11:18:48.033878 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10d53364-23ca-4726-bed9-460fb6763fa1-bundle\") pod \"10d53364-23ca-4726-bed9-460fb6763fa1\" (UID: \"10d53364-23ca-4726-bed9-460fb6763fa1\") " Jan 20 11:18:48 crc kubenswrapper[4725]: I0120 11:18:48.034919 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10d53364-23ca-4726-bed9-460fb6763fa1-bundle" (OuterVolumeSpecName: "bundle") pod "10d53364-23ca-4726-bed9-460fb6763fa1" (UID: "10d53364-23ca-4726-bed9-460fb6763fa1"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:18:48 crc kubenswrapper[4725]: I0120 11:18:48.051999 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10d53364-23ca-4726-bed9-460fb6763fa1-kube-api-access-tr6hr" (OuterVolumeSpecName: "kube-api-access-tr6hr") pod "10d53364-23ca-4726-bed9-460fb6763fa1" (UID: "10d53364-23ca-4726-bed9-460fb6763fa1"). InnerVolumeSpecName "kube-api-access-tr6hr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:18:48 crc kubenswrapper[4725]: I0120 11:18:48.057897 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10d53364-23ca-4726-bed9-460fb6763fa1-util" (OuterVolumeSpecName: "util") pod "10d53364-23ca-4726-bed9-460fb6763fa1" (UID: "10d53364-23ca-4726-bed9-460fb6763fa1"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:18:48 crc kubenswrapper[4725]: I0120 11:18:48.137373 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tr6hr\" (UniqueName: \"kubernetes.io/projected/10d53364-23ca-4726-bed9-460fb6763fa1-kube-api-access-tr6hr\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:48 crc kubenswrapper[4725]: I0120 11:18:48.137473 4725 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10d53364-23ca-4726-bed9-460fb6763fa1-util\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:48 crc kubenswrapper[4725]: I0120 11:18:48.137487 4725 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10d53364-23ca-4726-bed9-460fb6763fa1-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:48 crc kubenswrapper[4725]: I0120 11:18:48.421326 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" 
event={"ID":"10d53364-23ca-4726-bed9-460fb6763fa1","Type":"ContainerDied","Data":"f1c998f0cf5c0c88b3f6016cefee38892bc62bcc818c2c7c870f6cd0cbb1bf10"} Jan 20 11:18:48 crc kubenswrapper[4725]: I0120 11:18:48.421632 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1c998f0cf5c0c88b3f6016cefee38892bc62bcc818c2c7c870f6cd0cbb1bf10" Jan 20 11:18:48 crc kubenswrapper[4725]: I0120 11:18:48.421445 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" Jan 20 11:18:49 crc kubenswrapper[4725]: E0120 11:18:49.138756 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" Jan 20 11:18:49 crc kubenswrapper[4725]: E0120 11:18:49.139306 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5_openshift-operators(a5d78053-6a08-448a-93ca-1c0e2334617a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 11:18:49 crc kubenswrapper[4725]: E0120 11:18:49.143455 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5" podUID="a5d78053-6a08-448a-93ca-1c0e2334617a" Jan 20 11:18:49 crc kubenswrapper[4725]: E0120 11:18:49.149324 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" Jan 20 11:18:49 crc kubenswrapper[4725]: E0120 11:18:49.149552 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-c88c9f498-lh85b_openshift-operators(05acb89f-79ef-4e5a-8713-af3abbf86d5a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 11:18:49 crc kubenswrapper[4725]: E0120 11:18:49.150852 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-lh85b" podUID="05acb89f-79ef-4e5a-8713-af3abbf86d5a" Jan 20 11:18:49 crc kubenswrapper[4725]: E0120 11:18:49.432435 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ImagePullBackOff: 
\"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea\\\"\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5" podUID="a5d78053-6a08-448a-93ca-1c0e2334617a" Jan 20 11:18:49 crc kubenswrapper[4725]: E0120 11:18:49.432894 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea\\\"\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-lh85b" podUID="05acb89f-79ef-4e5a-8713-af3abbf86d5a" Jan 20 11:18:50 crc kubenswrapper[4725]: E0120 11:18:50.342892 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" Jan 20 11:18:50 crc kubenswrapper[4725]: E0120 11:18:50.343136 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a,Command:[],Args:[--prometheus-config-reloader=$(RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER) --prometheus-instance-selector=app.kubernetes.io/managed-by=observability-operator --alertmanager-instance-selector=app.kubernetes.io/managed-by=observability-operator --thanos-ruler-instance-selector=app.kubernetes.io/managed-by=observability-operator 
--watch-referenced-objects-in-all-namespaces=true --disable-unmanaged-prometheus-configuration=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOGC,Value:30,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER,Value:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{157286400 0} {} 150Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8x66r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-68bc856cb9-sl5rg_openshift-operators(0bc9f0db-ee2d-43d3-8fc7-66f2b155c710): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 11:18:50 
crc kubenswrapper[4725]: E0120 11:18:50.344626 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sl5rg" podUID="0bc9f0db-ee2d-43d3-8fc7-66f2b155c710" Jan 20 11:18:50 crc kubenswrapper[4725]: E0120 11:18:50.437174 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sl5rg" podUID="0bc9f0db-ee2d-43d3-8fc7-66f2b155c710" Jan 20 11:18:51 crc kubenswrapper[4725]: E0120 11:18:51.404394 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61 is running failed: container process not found" containerID="ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61" cmd=["grpc_health_probe","-addr=:50051"] Jan 20 11:18:51 crc kubenswrapper[4725]: E0120 11:18:51.406154 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61 is running failed: container process not found" containerID="ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61" cmd=["grpc_health_probe","-addr=:50051"] Jan 20 11:18:51 crc kubenswrapper[4725]: E0120 11:18:51.406960 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not 
created or running: checking if PID of ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61 is running failed: container process not found" containerID="ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61" cmd=["grpc_health_probe","-addr=:50051"] Jan 20 11:18:51 crc kubenswrapper[4725]: E0120 11:18:51.407057 4725 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-vvv86" podUID="d4e296b6-b743-4253-8266-848212ba1001" containerName="registry-server" Jan 20 11:18:51 crc kubenswrapper[4725]: E0120 11:18:51.471047 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="registry.connect.redhat.com/elastic/eck-operator@sha256:28925fffef8f7c920b2510810cbcfc0f3dadab5f8a80b01fd5ae500e5c070105" Jan 20 11:18:51 crc kubenswrapper[4725]: E0120 11:18:51.471353 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:registry.connect.redhat.com/elastic/eck-operator@sha256:28925fffef8f7c920b2510810cbcfc0f3dadab5f8a80b01fd5ae500e5c070105,Command:[],Args:[manager --config=/conf/eck.yaml --manage-webhook-certs=false --enable-webhook --ubi-only 
--distribution-channel=certified-operators],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https-webhook,HostPort:0,ContainerPort:9443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NAMESPACES,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.operatorNamespace'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_IMAGE,Value:registry.connect.redhat.com/elastic/eck-operator@sha256:28925fffef8f7c920b2510810cbcfc0f3dadab5f8a80b01fd5ae500e5c070105,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:elasticsearch-eck-operator-certified.v3.2.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{1 0} {} 1 DecimalSI},memory: {{1073741824 0} {} 1Gi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{157286400 0} {} 150Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ntxxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod elastic-operator-6886c99b94-tzbc7_service-telemetry(ce11e344-b219-4b22-b05b-a21b78fc7d98): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 11:18:51 crc kubenswrapper[4725]: E0120 11:18:51.472941 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/elastic-operator-6886c99b94-tzbc7" podUID="ce11e344-b219-4b22-b05b-a21b78fc7d98" Jan 20 11:18:51 crc kubenswrapper[4725]: I0120 11:18:51.528544 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:51 crc kubenswrapper[4725]: I0120 11:18:51.610418 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4e296b6-b743-4253-8266-848212ba1001-utilities\") pod \"d4e296b6-b743-4253-8266-848212ba1001\" (UID: \"d4e296b6-b743-4253-8266-848212ba1001\") " Jan 20 11:18:51 crc kubenswrapper[4725]: I0120 11:18:51.610540 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnsmr\" (UniqueName: \"kubernetes.io/projected/d4e296b6-b743-4253-8266-848212ba1001-kube-api-access-dnsmr\") pod \"d4e296b6-b743-4253-8266-848212ba1001\" (UID: \"d4e296b6-b743-4253-8266-848212ba1001\") " Jan 20 11:18:51 crc kubenswrapper[4725]: I0120 11:18:51.610626 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4e296b6-b743-4253-8266-848212ba1001-catalog-content\") pod \"d4e296b6-b743-4253-8266-848212ba1001\" (UID: \"d4e296b6-b743-4253-8266-848212ba1001\") " Jan 20 11:18:51 crc kubenswrapper[4725]: I0120 11:18:51.611950 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4e296b6-b743-4253-8266-848212ba1001-utilities" (OuterVolumeSpecName: "utilities") pod "d4e296b6-b743-4253-8266-848212ba1001" (UID: "d4e296b6-b743-4253-8266-848212ba1001"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:18:51 crc kubenswrapper[4725]: I0120 11:18:51.626972 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4e296b6-b743-4253-8266-848212ba1001-kube-api-access-dnsmr" (OuterVolumeSpecName: "kube-api-access-dnsmr") pod "d4e296b6-b743-4253-8266-848212ba1001" (UID: "d4e296b6-b743-4253-8266-848212ba1001"). InnerVolumeSpecName "kube-api-access-dnsmr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:18:51 crc kubenswrapper[4725]: I0120 11:18:51.671055 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4e296b6-b743-4253-8266-848212ba1001-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4e296b6-b743-4253-8266-848212ba1001" (UID: "d4e296b6-b743-4253-8266-848212ba1001"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:18:51 crc kubenswrapper[4725]: I0120 11:18:51.712793 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4e296b6-b743-4253-8266-848212ba1001-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:51 crc kubenswrapper[4725]: I0120 11:18:51.712869 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4e296b6-b743-4253-8266-848212ba1001-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:51 crc kubenswrapper[4725]: I0120 11:18:51.712884 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnsmr\" (UniqueName: \"kubernetes.io/projected/d4e296b6-b743-4253-8266-848212ba1001-kube-api-access-dnsmr\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:52 crc kubenswrapper[4725]: I0120 11:18:52.452650 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:52 crc kubenswrapper[4725]: I0120 11:18:52.452900 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvv86" event={"ID":"d4e296b6-b743-4253-8266-848212ba1001","Type":"ContainerDied","Data":"a51a3e201153e9052123f62f1b87986d749e718183596e10305fe985accf5553"} Jan 20 11:18:52 crc kubenswrapper[4725]: I0120 11:18:52.453593 4725 scope.go:117] "RemoveContainer" containerID="ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61" Jan 20 11:18:52 crc kubenswrapper[4725]: E0120 11:18:52.454773 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck-operator@sha256:28925fffef8f7c920b2510810cbcfc0f3dadab5f8a80b01fd5ae500e5c070105\\\"\"" pod="service-telemetry/elastic-operator-6886c99b94-tzbc7" podUID="ce11e344-b219-4b22-b05b-a21b78fc7d98" Jan 20 11:18:52 crc kubenswrapper[4725]: I0120 11:18:52.500947 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vvv86"] Jan 20 11:18:52 crc kubenswrapper[4725]: I0120 11:18:52.506157 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vvv86"] Jan 20 11:18:52 crc kubenswrapper[4725]: I0120 11:18:52.942065 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4e296b6-b743-4253-8266-848212ba1001" path="/var/lib/kubelet/pods/d4e296b6-b743-4253-8266-848212ba1001/volumes" Jan 20 11:18:56 crc kubenswrapper[4725]: I0120 11:18:56.340464 4725 scope.go:117] "RemoveContainer" containerID="c24ec98af151bdfd2c547f9560f387e755da525ee903219fa6784697775e8546" Jan 20 11:18:56 crc kubenswrapper[4725]: I0120 11:18:56.411444 4725 scope.go:117] "RemoveContainer" containerID="e8a08a6d18e5019b0dee076330df7d141805a5d7b75d721ede3967e6a302582c" Jan 20 
11:18:57 crc kubenswrapper[4725]: I0120 11:18:57.527480 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-ckz5m" event={"ID":"5a2dcc7a-6d62-412d-a25f-fea592c85bf5","Type":"ContainerStarted","Data":"d7981d56f83107dfdb67a66ae08dc92b86b0b5a09c0b8adfa83ebbd2415fbb0a"} Jan 20 11:18:57 crc kubenswrapper[4725]: I0120 11:18:57.529174 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-ckz5m" Jan 20 11:18:57 crc kubenswrapper[4725]: I0120 11:18:57.530482 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-cjnzp" event={"ID":"ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002","Type":"ContainerStarted","Data":"42c5f7cedac5395ba98a70b66fb37997f02d2baf15a657dc7e86f3801eddfed6"} Jan 20 11:18:57 crc kubenswrapper[4725]: I0120 11:18:57.530820 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-cjnzp" Jan 20 11:18:57 crc kubenswrapper[4725]: I0120 11:18:57.533092 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-7p9dr" event={"ID":"a923dc59-d518-4ee4-a92c-1bb5ad6e7158","Type":"ContainerStarted","Data":"76c8059c9ce0bac718250baa31b7abd576df85323cff90d10ef2a3ccca079460"} Jan 20 11:18:57 crc kubenswrapper[4725]: I0120 11:18:57.563968 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-ckz5m" podStartSLOduration=3.995931246 podStartE2EDuration="36.56394375s" podCreationTimestamp="2026-01-20 11:18:21 +0000 UTC" firstStartedPulling="2026-01-20 11:18:23.31937384 +0000 UTC m=+831.527695813" lastFinishedPulling="2026-01-20 11:18:55.887386344 +0000 UTC m=+864.095708317" observedRunningTime="2026-01-20 11:18:57.55725283 +0000 UTC m=+865.765574813" watchObservedRunningTime="2026-01-20 11:18:57.56394375 +0000 UTC 
m=+865.772265723" Jan 20 11:18:57 crc kubenswrapper[4725]: I0120 11:18:57.605522 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-cjnzp" podStartSLOduration=3.9033286609999998 podStartE2EDuration="36.605507306s" podCreationTimestamp="2026-01-20 11:18:21 +0000 UTC" firstStartedPulling="2026-01-20 11:18:23.18648262 +0000 UTC m=+831.394804593" lastFinishedPulling="2026-01-20 11:18:55.888661255 +0000 UTC m=+864.096983238" observedRunningTime="2026-01-20 11:18:57.60148711 +0000 UTC m=+865.809809083" watchObservedRunningTime="2026-01-20 11:18:57.605507306 +0000 UTC m=+865.813829279" Jan 20 11:18:57 crc kubenswrapper[4725]: I0120 11:18:57.628027 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-5bb49f789d-7p9dr" podStartSLOduration=2.649023137 podStartE2EDuration="28.627994912s" podCreationTimestamp="2026-01-20 11:18:29 +0000 UTC" firstStartedPulling="2026-01-20 11:18:30.462553763 +0000 UTC m=+838.670875726" lastFinishedPulling="2026-01-20 11:18:56.441525528 +0000 UTC m=+864.649847501" observedRunningTime="2026-01-20 11:18:57.623152991 +0000 UTC m=+865.831474984" watchObservedRunningTime="2026-01-20 11:18:57.627994912 +0000 UTC m=+865.836316885" Jan 20 11:18:57 crc kubenswrapper[4725]: I0120 11:18:57.884473 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-cjnzp" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.209100 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-8p62k"] Jan 20 11:19:01 crc kubenswrapper[4725]: E0120 11:19:01.209681 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10d53364-23ca-4726-bed9-460fb6763fa1" containerName="pull" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.209696 4725 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="10d53364-23ca-4726-bed9-460fb6763fa1" containerName="pull" Jan 20 11:19:01 crc kubenswrapper[4725]: E0120 11:19:01.209710 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4e296b6-b743-4253-8266-848212ba1001" containerName="registry-server" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.209718 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4e296b6-b743-4253-8266-848212ba1001" containerName="registry-server" Jan 20 11:19:01 crc kubenswrapper[4725]: E0120 11:19:01.209728 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10d53364-23ca-4726-bed9-460fb6763fa1" containerName="util" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.209735 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="10d53364-23ca-4726-bed9-460fb6763fa1" containerName="util" Jan 20 11:19:01 crc kubenswrapper[4725]: E0120 11:19:01.209744 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4e296b6-b743-4253-8266-848212ba1001" containerName="extract-content" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.209751 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4e296b6-b743-4253-8266-848212ba1001" containerName="extract-content" Jan 20 11:19:01 crc kubenswrapper[4725]: E0120 11:19:01.209766 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4e296b6-b743-4253-8266-848212ba1001" containerName="extract-utilities" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.209772 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4e296b6-b743-4253-8266-848212ba1001" containerName="extract-utilities" Jan 20 11:19:01 crc kubenswrapper[4725]: E0120 11:19:01.209788 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10d53364-23ca-4726-bed9-460fb6763fa1" containerName="extract" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.209795 4725 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="10d53364-23ca-4726-bed9-460fb6763fa1" containerName="extract" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.209929 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4e296b6-b743-4253-8266-848212ba1001" containerName="registry-server" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.209945 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="10d53364-23ca-4726-bed9-460fb6763fa1" containerName="extract" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.210564 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-8p62k" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.214430 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.214878 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.219945 4725 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-6z2qj" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.229826 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-8p62k"] Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.391276 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r89m\" (UniqueName: \"kubernetes.io/projected/07b8a4cd-9f0f-405a-a03d-749bdd01dcce-kube-api-access-5r89m\") pod \"cert-manager-operator-controller-manager-5446d6888b-8p62k\" (UID: \"07b8a4cd-9f0f-405a-a03d-749bdd01dcce\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-8p62k" Jan 20 11:19:01 crc 
kubenswrapper[4725]: I0120 11:19:01.391340 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/07b8a4cd-9f0f-405a-a03d-749bdd01dcce-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-8p62k\" (UID: \"07b8a4cd-9f0f-405a-a03d-749bdd01dcce\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-8p62k"
Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.492348 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/07b8a4cd-9f0f-405a-a03d-749bdd01dcce-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-8p62k\" (UID: \"07b8a4cd-9f0f-405a-a03d-749bdd01dcce\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-8p62k"
Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.492464 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r89m\" (UniqueName: \"kubernetes.io/projected/07b8a4cd-9f0f-405a-a03d-749bdd01dcce-kube-api-access-5r89m\") pod \"cert-manager-operator-controller-manager-5446d6888b-8p62k\" (UID: \"07b8a4cd-9f0f-405a-a03d-749bdd01dcce\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-8p62k"
Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.492878 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/07b8a4cd-9f0f-405a-a03d-749bdd01dcce-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-8p62k\" (UID: \"07b8a4cd-9f0f-405a-a03d-749bdd01dcce\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-8p62k"
Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.521620 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r89m\" (UniqueName: \"kubernetes.io/projected/07b8a4cd-9f0f-405a-a03d-749bdd01dcce-kube-api-access-5r89m\") pod \"cert-manager-operator-controller-manager-5446d6888b-8p62k\" (UID: \"07b8a4cd-9f0f-405a-a03d-749bdd01dcce\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-8p62k"
Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.532577 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-8p62k"
Jan 20 11:19:02 crc kubenswrapper[4725]: I0120 11:19:02.279065 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-ckz5m"
Jan 20 11:19:02 crc kubenswrapper[4725]: I0120 11:19:02.356464 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-8p62k"]
Jan 20 11:19:02 crc kubenswrapper[4725]: W0120 11:19:02.364485 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07b8a4cd_9f0f_405a_a03d_749bdd01dcce.slice/crio-3adf1e682c07eb4732aad80eccc599ca3c4c23db14b636014b4f7e97db110fc9 WatchSource:0}: Error finding container 3adf1e682c07eb4732aad80eccc599ca3c4c23db14b636014b4f7e97db110fc9: Status 404 returned error can't find the container with id 3adf1e682c07eb4732aad80eccc599ca3c4c23db14b636014b4f7e97db110fc9
Jan 20 11:19:02 crc kubenswrapper[4725]: I0120 11:19:02.775106 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-8p62k" event={"ID":"07b8a4cd-9f0f-405a-a03d-749bdd01dcce","Type":"ContainerStarted","Data":"3adf1e682c07eb4732aad80eccc599ca3c4c23db14b636014b4f7e97db110fc9"}
Jan 20 11:19:04 crc kubenswrapper[4725]: I0120 11:19:04.800539 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5" event={"ID":"a5d78053-6a08-448a-93ca-1c0e2334617a","Type":"ContainerStarted","Data":"d0e1739a0253cf18b9a53d0437dcfd1486c75bf1be5683ebaf6a85995537d336"}
Jan 20 11:19:04 crc kubenswrapper[4725]: I0120 11:19:04.811788 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-lh85b" event={"ID":"05acb89f-79ef-4e5a-8713-af3abbf86d5a","Type":"ContainerStarted","Data":"0c71ce19be34f4c5a0d39e505dd140cfcbce930abea4e67c4cec87def815ed1e"}
Jan 20 11:19:04 crc kubenswrapper[4725]: I0120 11:19:04.816852 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sl5rg" event={"ID":"0bc9f0db-ee2d-43d3-8fc7-66f2b155c710","Type":"ContainerStarted","Data":"cc1009caa3f66e9dff968b62d354661b246a24d9b9b0d93229615ddb79b5e678"}
Jan 20 11:19:04 crc kubenswrapper[4725]: I0120 11:19:04.835884 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5" podStartSLOduration=3.130247287 podStartE2EDuration="43.835864701s" podCreationTimestamp="2026-01-20 11:18:21 +0000 UTC" firstStartedPulling="2026-01-20 11:18:22.995953941 +0000 UTC m=+831.204275914" lastFinishedPulling="2026-01-20 11:19:03.701571355 +0000 UTC m=+871.909893328" observedRunningTime="2026-01-20 11:19:04.831882475 +0000 UTC m=+873.040204458" watchObservedRunningTime="2026-01-20 11:19:04.835864701 +0000 UTC m=+873.044186674"
Jan 20 11:19:04 crc kubenswrapper[4725]: I0120 11:19:04.859576 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sl5rg" podStartSLOduration=2.6908189609999997 podStartE2EDuration="43.859548975s" podCreationTimestamp="2026-01-20 11:18:21 +0000 UTC" firstStartedPulling="2026-01-20 11:18:23.028988923 +0000 UTC m=+831.237310896" lastFinishedPulling="2026-01-20 11:19:04.197718937 +0000 UTC m=+872.406040910" observedRunningTime="2026-01-20 11:19:04.85430965 +0000 UTC m=+873.062631683" watchObservedRunningTime="2026-01-20 11:19:04.859548975 +0000 UTC m=+873.067870948"
Jan 20 11:19:04 crc kubenswrapper[4725]: I0120 11:19:04.877167 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-lh85b" podStartSLOduration=-9223371992.977663 podStartE2EDuration="43.877113157s" podCreationTimestamp="2026-01-20 11:18:21 +0000 UTC" firstStartedPulling="2026-01-20 11:18:23.144764364 +0000 UTC m=+831.353086337" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:19:04.876534858 +0000 UTC m=+873.084856831" watchObservedRunningTime="2026-01-20 11:19:04.877113157 +0000 UTC m=+873.085435120"
Jan 20 11:19:17 crc kubenswrapper[4725]: E0120 11:19:17.227204 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cert-manager/cert-manager-operator-rhel9@sha256:fa8de363ab4435c1085ac37f1bad488828c6ae8ba361c5f865c27ef577610911"
Jan 20 11:19:17 crc kubenswrapper[4725]: E0120 11:19:17.227933 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cert-manager-operator,Image:registry.redhat.io/cert-manager/cert-manager-operator-rhel9@sha256:fa8de363ab4435c1085ac37f1bad488828c6ae8ba361c5f865c27ef577610911,Command:[/usr/bin/cert-manager-operator],Args:[start --v=$(OPERATOR_LOG_LEVEL) --trusted-ca-configmap=$(TRUSTED_CA_CONFIGMAP_NAME) --cloud-credentials-secret=$(CLOUD_CREDENTIALS_SECRET_NAME) --unsupported-addon-features=$(UNSUPPORTED_ADDON_FEATURES)],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:cert-manager-operator,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_WEBHOOK,Value:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_CA_INJECTOR,Value:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_CONTROLLER,Value:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_ACMESOLVER,Value:registry.redhat.io/cert-manager/jetstack-cert-manager-acmesolver-rhel9@sha256:ba937fc4b9eee31422914352c11a45b90754ba4fbe490ea45249b90afdc4e0a7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_ISTIOCSR,Value:registry.redhat.io/cert-manager/cert-manager-istio-csr-rhel9@sha256:af1ac813b8ee414ef215936f05197bc498bccbd540f3e2a93cb522221ba112bc,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.18.3,ValueFrom:nil,},EnvVar{Name:ISTIOCSR_OPERAND_IMAGE_VERSION,Value:0.14.2,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:1.18.0,ValueFrom:nil,},EnvVar{Name:OPERATOR_LOG_LEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:TRUSTED_CA_CONFIGMAP_NAME,Value:,ValueFrom:nil,},EnvVar{Name:CLOUD_CREDENTIALS_SECRET_NAME,Value:,ValueFrom:nil,},EnvVar{Name:UNSUPPORTED_ADDON_FEATURES,Value:,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cert-manager-operator.v1.18.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{33554432 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5r89m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*1000680000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-operator-controller-manager-5446d6888b-8p62k_cert-manager-operator(07b8a4cd-9f0f-405a-a03d-749bdd01dcce): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 20 11:19:17 crc kubenswrapper[4725]: E0120 11:19:17.229269 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-8p62k" podUID="07b8a4cd-9f0f-405a-a03d-749bdd01dcce"
Jan 20 11:19:18 crc kubenswrapper[4725]: I0120 11:19:18.082808 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-6886c99b94-tzbc7" event={"ID":"ce11e344-b219-4b22-b05b-a21b78fc7d98","Type":"ContainerStarted","Data":"7b969b07c35fae20dba239a302f881f99dec25f23bc169ffb8329a5a827a4ddd"}
Jan 20 11:19:18 crc kubenswrapper[4725]: E0120 11:19:18.085684 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cert-manager/cert-manager-operator-rhel9@sha256:fa8de363ab4435c1085ac37f1bad488828c6ae8ba361c5f865c27ef577610911\\\"\"" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-8p62k" podUID="07b8a4cd-9f0f-405a-a03d-749bdd01dcce"
Jan 20 11:19:18 crc kubenswrapper[4725]: I0120 11:19:18.132910 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-6886c99b94-tzbc7" podStartSLOduration=3.894345487 podStartE2EDuration="52.132885531s" podCreationTimestamp="2026-01-20 11:18:26 +0000 UTC" firstStartedPulling="2026-01-20 11:18:28.99899247 +0000 UTC m=+837.207314443" lastFinishedPulling="2026-01-20 11:19:17.237532514 +0000 UTC m=+885.445854487" observedRunningTime="2026-01-20 11:19:18.127816341 +0000 UTC m=+886.336138324" watchObservedRunningTime="2026-01-20 11:19:18.132885531 +0000 UTC m=+886.341207514"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.135039 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.136707 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.207878 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.208670 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.208788 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.209004 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.209159 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.209319 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.209526 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.209600 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.209738 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.209801 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.209855 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.209870 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.209908 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.209924 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.209960 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.216736 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-dockercfg-rndtg"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.217044 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-config"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.217281 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-unicast-hosts"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.217416 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-http-certs-internal"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.217545 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-scripts"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.217694 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-transport-certs"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.217814 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-remote-ca"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.218657 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-xpack-file-realm"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.235676 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-internal-users"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.311950 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.312019 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.312048 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.312068 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.312123 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.312150 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.312177 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.312203 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.312231 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.312254 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.312282 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.312302 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.312321 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.312339 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.312365 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.312934 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.313339 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.313689 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.313862 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.314068 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.314295 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.315504 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.315729 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.323285 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.335620 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.336534 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.336577 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.336599 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.338702 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.339784 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.361968 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.542399 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:20 crc kubenswrapper[4725]: W0120 11:19:20.048607 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf12e47b3_54a1_4f6b_8e7a_0dc9f25358f6.slice/crio-5fb7c044520a0e6d60d2d5811302ca733e0a5faa29d25fc77b3d0cbbfa7c9f39 WatchSource:0}: Error finding container 5fb7c044520a0e6d60d2d5811302ca733e0a5faa29d25fc77b3d0cbbfa7c9f39: Status 404 returned error can't find the container with id 5fb7c044520a0e6d60d2d5811302ca733e0a5faa29d25fc77b3d0cbbfa7c9f39
Jan 20 11:19:20 crc kubenswrapper[4725]: I0120 11:19:20.050919 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Jan 20 11:19:20 crc kubenswrapper[4725]: I0120 11:19:20.098713 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6","Type":"ContainerStarted","Data":"5fb7c044520a0e6d60d2d5811302ca733e0a5faa29d25fc77b3d0cbbfa7c9f39"}
Jan 20 11:19:40 crc kubenswrapper[4725]: I0120 11:19:40.523024 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-8p62k" event={"ID":"07b8a4cd-9f0f-405a-a03d-749bdd01dcce","Type":"ContainerStarted","Data":"5f4f37aa9d44600fa14fc73b7a7443feb6748c56330307de679b17b3a3da6422"}
Jan 20 11:19:40 crc kubenswrapper[4725]: I0120 11:19:40.524205 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6","Type":"ContainerStarted","Data":"bf9180dab5339ddb58dac81d3d95278af49bc678c69d2ccffd4b22bef1b300a5"}
Jan 20 11:19:40 crc kubenswrapper[4725]: I0120 11:19:40.583553 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-8p62k" podStartSLOduration=2.218155625 podStartE2EDuration="39.583530012s" podCreationTimestamp="2026-01-20 11:19:01 +0000 UTC" firstStartedPulling="2026-01-20 11:19:02.36869349 +0000 UTC m=+870.577015463" lastFinishedPulling="2026-01-20 11:19:39.734067877 +0000 UTC m=+907.942389850" observedRunningTime="2026-01-20 11:19:40.57997493 +0000 UTC m=+908.788296923" watchObservedRunningTime="2026-01-20 11:19:40.583530012 +0000 UTC m=+908.791851995"
Jan 20 11:19:40 crc kubenswrapper[4725]: I0120 11:19:40.777068 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Jan 20 11:19:40 crc kubenswrapper[4725]: I0120 11:19:40.813015 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Jan 20 11:19:42 crc kubenswrapper[4725]: I0120 11:19:42.544453 4725 generic.go:334] "Generic (PLEG): container finished" podID="f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6" containerID="bf9180dab5339ddb58dac81d3d95278af49bc678c69d2ccffd4b22bef1b300a5" exitCode=0
Jan 20 11:19:42 crc kubenswrapper[4725]: I0120 11:19:42.544527 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6","Type":"ContainerDied","Data":"bf9180dab5339ddb58dac81d3d95278af49bc678c69d2ccffd4b22bef1b300a5"}
Jan 20 11:19:44 crc kubenswrapper[4725]: I0120 11:19:44.569931 4725 generic.go:334] "Generic (PLEG): container finished" podID="f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6" containerID="a53abbcf725fa8441e49ba86debf3440670cc01b8f475a056b593122277e60f4" exitCode=0
Jan 20 11:19:44 crc kubenswrapper[4725]: I0120 11:19:44.570566 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6","Type":"ContainerDied","Data":"a53abbcf725fa8441e49ba86debf3440670cc01b8f475a056b593122277e60f4"}
Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.011817 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-bxlks"]
Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.013038 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-bxlks"
Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.015189 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.015258 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.015484 4725 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-2dflb"
Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.034159 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-bxlks"]
Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.073762 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.075344 4725 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.077270 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-global-ca" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079206 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-ca" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079386 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ns4k2" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079256 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-sys-config" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079503 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/61bedcc7-14db-4cb4-b3df-04733ce92bb2-builder-dockercfg-ns4k2-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079552 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079621 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww4nf\" (UniqueName: 
\"kubernetes.io/projected/61bedcc7-14db-4cb4-b3df-04733ce92bb2-kube-api-access-ww4nf\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079647 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079680 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079763 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8b639e20-8ca7-4b37-8271-ada2858140b9-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-bxlks\" (UID: \"8b639e20-8ca7-4b37-8271-ada2858140b9\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-bxlks" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079797 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/61bedcc7-14db-4cb4-b3df-04733ce92bb2-builder-dockercfg-ns4k2-push\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 
11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079821 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bncq9\" (UniqueName: \"kubernetes.io/projected/8b639e20-8ca7-4b37-8271-ada2858140b9-kube-api-access-bncq9\") pod \"cert-manager-webhook-f4fb5df64-bxlks\" (UID: \"8b639e20-8ca7-4b37-8271-ada2858140b9\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-bxlks" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079846 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079864 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079885 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079923 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/61bedcc7-14db-4cb4-b3df-04733ce92bb2-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079942 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079958 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/61bedcc7-14db-4cb4-b3df-04733ce92bb2-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.088826 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.181792 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww4nf\" (UniqueName: \"kubernetes.io/projected/61bedcc7-14db-4cb4-b3df-04733ce92bb2-kube-api-access-ww4nf\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.181873 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: 
\"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.181923 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.181962 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8b639e20-8ca7-4b37-8271-ada2858140b9-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-bxlks\" (UID: \"8b639e20-8ca7-4b37-8271-ada2858140b9\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-bxlks" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.182013 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/61bedcc7-14db-4cb4-b3df-04733ce92bb2-builder-dockercfg-ns4k2-push\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.182038 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bncq9\" (UniqueName: \"kubernetes.io/projected/8b639e20-8ca7-4b37-8271-ada2858140b9-kube-api-access-bncq9\") pod \"cert-manager-webhook-f4fb5df64-bxlks\" (UID: \"8b639e20-8ca7-4b37-8271-ada2858140b9\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-bxlks" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.182245 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.182545 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.182832 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.182907 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.183304 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.182270 4725 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.183458 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.183590 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/61bedcc7-14db-4cb4-b3df-04733ce92bb2-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.183647 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.183694 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/61bedcc7-14db-4cb4-b3df-04733ce92bb2-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 
11:19:45.183773 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.183783 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/61bedcc7-14db-4cb4-b3df-04733ce92bb2-builder-dockercfg-ns4k2-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.183833 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/61bedcc7-14db-4cb4-b3df-04733ce92bb2-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.183869 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.184324 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.184539 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.184594 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/61bedcc7-14db-4cb4-b3df-04733ce92bb2-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.188780 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/61bedcc7-14db-4cb4-b3df-04733ce92bb2-builder-dockercfg-ns4k2-push\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.189586 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/61bedcc7-14db-4cb4-b3df-04733ce92bb2-builder-dockercfg-ns4k2-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.201769 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww4nf\" (UniqueName: \"kubernetes.io/projected/61bedcc7-14db-4cb4-b3df-04733ce92bb2-kube-api-access-ww4nf\") pod 
\"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.204473 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8b639e20-8ca7-4b37-8271-ada2858140b9-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-bxlks\" (UID: \"8b639e20-8ca7-4b37-8271-ada2858140b9\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-bxlks" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.206393 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bncq9\" (UniqueName: \"kubernetes.io/projected/8b639e20-8ca7-4b37-8271-ada2858140b9-kube-api-access-bncq9\") pod \"cert-manager-webhook-f4fb5df64-bxlks\" (UID: \"8b639e20-8ca7-4b37-8271-ada2858140b9\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-bxlks" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.329567 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-bxlks" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.428956 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.582774 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6","Type":"ContainerStarted","Data":"1c2f4b5de3a927025d64961d3fe81e9e36e4eda258298a00eeafcb2e26c4c7b8"} Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.583882 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/elasticsearch-es-default-0" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.690130 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=6.742023269 podStartE2EDuration="26.690107586s" podCreationTimestamp="2026-01-20 11:19:19 +0000 UTC" firstStartedPulling="2026-01-20 11:19:20.05106986 +0000 UTC m=+888.259391833" lastFinishedPulling="2026-01-20 11:19:39.999154187 +0000 UTC m=+908.207476150" observedRunningTime="2026-01-20 11:19:45.686288146 +0000 UTC m=+913.894610119" watchObservedRunningTime="2026-01-20 11:19:45.690107586 +0000 UTC m=+913.898429559" Jan 20 11:19:46 crc kubenswrapper[4725]: I0120 11:19:46.001806 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-bxlks"] Jan 20 11:19:46 crc kubenswrapper[4725]: I0120 11:19:46.091538 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 20 11:19:46 crc kubenswrapper[4725]: W0120 11:19:46.121125 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61bedcc7_14db_4cb4_b3df_04733ce92bb2.slice/crio-bc9c0c7add9c510f35cce2bdbad090847dfc1e38b570cc2a6f8e27b6e79c3ca2 WatchSource:0}: Error finding container bc9c0c7add9c510f35cce2bdbad090847dfc1e38b570cc2a6f8e27b6e79c3ca2: Status 404 
returned error can't find the container with id bc9c0c7add9c510f35cce2bdbad090847dfc1e38b570cc2a6f8e27b6e79c3ca2 Jan 20 11:19:46 crc kubenswrapper[4725]: W0120 11:19:46.135308 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b639e20_8ca7_4b37_8271_ada2858140b9.slice/crio-ced2d1f415e377beee86adaa4f259c6088594f3f0c276206c6dfc2d0002052ef WatchSource:0}: Error finding container ced2d1f415e377beee86adaa4f259c6088594f3f0c276206c6dfc2d0002052ef: Status 404 returned error can't find the container with id ced2d1f415e377beee86adaa4f259c6088594f3f0c276206c6dfc2d0002052ef Jan 20 11:19:46 crc kubenswrapper[4725]: I0120 11:19:46.589702 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"61bedcc7-14db-4cb4-b3df-04733ce92bb2","Type":"ContainerStarted","Data":"bc9c0c7add9c510f35cce2bdbad090847dfc1e38b570cc2a6f8e27b6e79c3ca2"} Jan 20 11:19:46 crc kubenswrapper[4725]: I0120 11:19:46.592197 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-bxlks" event={"ID":"8b639e20-8ca7-4b37-8271-ada2858140b9","Type":"ContainerStarted","Data":"ced2d1f415e377beee86adaa4f259c6088594f3f0c276206c6dfc2d0002052ef"} Jan 20 11:19:47 crc kubenswrapper[4725]: I0120 11:19:47.567670 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-2m9v2"] Jan 20 11:19:47 crc kubenswrapper[4725]: I0120 11:19:47.568746 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-2m9v2" Jan 20 11:19:47 crc kubenswrapper[4725]: I0120 11:19:47.571415 4725 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-l72q6" Jan 20 11:19:47 crc kubenswrapper[4725]: I0120 11:19:47.579928 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-2m9v2"] Jan 20 11:19:47 crc kubenswrapper[4725]: I0120 11:19:47.661882 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkh9h\" (UniqueName: \"kubernetes.io/projected/62554d79-c9bb-4b40-9153-989791392664-kube-api-access-hkh9h\") pod \"cert-manager-cainjector-855d9ccff4-2m9v2\" (UID: \"62554d79-c9bb-4b40-9153-989791392664\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-2m9v2" Jan 20 11:19:47 crc kubenswrapper[4725]: I0120 11:19:47.661991 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/62554d79-c9bb-4b40-9153-989791392664-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-2m9v2\" (UID: \"62554d79-c9bb-4b40-9153-989791392664\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-2m9v2" Jan 20 11:19:47 crc kubenswrapper[4725]: I0120 11:19:47.763830 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkh9h\" (UniqueName: \"kubernetes.io/projected/62554d79-c9bb-4b40-9153-989791392664-kube-api-access-hkh9h\") pod \"cert-manager-cainjector-855d9ccff4-2m9v2\" (UID: \"62554d79-c9bb-4b40-9153-989791392664\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-2m9v2" Jan 20 11:19:47 crc kubenswrapper[4725]: I0120 11:19:47.764036 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/62554d79-c9bb-4b40-9153-989791392664-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-2m9v2\" (UID: \"62554d79-c9bb-4b40-9153-989791392664\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-2m9v2" Jan 20 11:19:47 crc kubenswrapper[4725]: I0120 11:19:47.799830 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/62554d79-c9bb-4b40-9153-989791392664-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-2m9v2\" (UID: \"62554d79-c9bb-4b40-9153-989791392664\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-2m9v2" Jan 20 11:19:47 crc kubenswrapper[4725]: I0120 11:19:47.808968 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkh9h\" (UniqueName: \"kubernetes.io/projected/62554d79-c9bb-4b40-9153-989791392664-kube-api-access-hkh9h\") pod \"cert-manager-cainjector-855d9ccff4-2m9v2\" (UID: \"62554d79-c9bb-4b40-9153-989791392664\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-2m9v2" Jan 20 11:19:47 crc kubenswrapper[4725]: I0120 11:19:47.975846 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-2m9v2" Jan 20 11:19:48 crc kubenswrapper[4725]: I0120 11:19:48.805121 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-2m9v2"] Jan 20 11:19:48 crc kubenswrapper[4725]: W0120 11:19:48.846815 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62554d79_c9bb_4b40_9153_989791392664.slice/crio-0ceff8740ff8cbaa35855800ff7d2913348fb63dcf68d1151b56250cb89b4c4c WatchSource:0}: Error finding container 0ceff8740ff8cbaa35855800ff7d2913348fb63dcf68d1151b56250cb89b4c4c: Status 404 returned error can't find the container with id 0ceff8740ff8cbaa35855800ff7d2913348fb63dcf68d1151b56250cb89b4c4c Jan 20 11:19:49 crc kubenswrapper[4725]: I0120 11:19:49.726045 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-2m9v2" event={"ID":"62554d79-c9bb-4b40-9153-989791392664","Type":"ContainerStarted","Data":"0ceff8740ff8cbaa35855800ff7d2913348fb63dcf68d1151b56250cb89b4c4c"} Jan 20 11:19:54 crc kubenswrapper[4725]: I0120 11:19:54.211018 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-8pwdf"] Jan 20 11:19:54 crc kubenswrapper[4725]: I0120 11:19:54.212488 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-8pwdf" Jan 20 11:19:54 crc kubenswrapper[4725]: I0120 11:19:54.217475 4725 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-7fgbw" Jan 20 11:19:54 crc kubenswrapper[4725]: I0120 11:19:54.236903 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-8pwdf"] Jan 20 11:19:54 crc kubenswrapper[4725]: I0120 11:19:54.270213 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f31ab59c-7288-4ebb-82b4-daa77ec5319c-bound-sa-token\") pod \"cert-manager-86cb77c54b-8pwdf\" (UID: \"f31ab59c-7288-4ebb-82b4-daa77ec5319c\") " pod="cert-manager/cert-manager-86cb77c54b-8pwdf" Jan 20 11:19:54 crc kubenswrapper[4725]: I0120 11:19:54.270295 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96f4c\" (UniqueName: \"kubernetes.io/projected/f31ab59c-7288-4ebb-82b4-daa77ec5319c-kube-api-access-96f4c\") pod \"cert-manager-86cb77c54b-8pwdf\" (UID: \"f31ab59c-7288-4ebb-82b4-daa77ec5319c\") " pod="cert-manager/cert-manager-86cb77c54b-8pwdf" Jan 20 11:19:54 crc kubenswrapper[4725]: I0120 11:19:54.372110 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f31ab59c-7288-4ebb-82b4-daa77ec5319c-bound-sa-token\") pod \"cert-manager-86cb77c54b-8pwdf\" (UID: \"f31ab59c-7288-4ebb-82b4-daa77ec5319c\") " pod="cert-manager/cert-manager-86cb77c54b-8pwdf" Jan 20 11:19:54 crc kubenswrapper[4725]: I0120 11:19:54.372197 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96f4c\" (UniqueName: \"kubernetes.io/projected/f31ab59c-7288-4ebb-82b4-daa77ec5319c-kube-api-access-96f4c\") pod \"cert-manager-86cb77c54b-8pwdf\" (UID: 
\"f31ab59c-7288-4ebb-82b4-daa77ec5319c\") " pod="cert-manager/cert-manager-86cb77c54b-8pwdf" Jan 20 11:19:54 crc kubenswrapper[4725]: I0120 11:19:54.405068 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f31ab59c-7288-4ebb-82b4-daa77ec5319c-bound-sa-token\") pod \"cert-manager-86cb77c54b-8pwdf\" (UID: \"f31ab59c-7288-4ebb-82b4-daa77ec5319c\") " pod="cert-manager/cert-manager-86cb77c54b-8pwdf" Jan 20 11:19:54 crc kubenswrapper[4725]: I0120 11:19:54.413412 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96f4c\" (UniqueName: \"kubernetes.io/projected/f31ab59c-7288-4ebb-82b4-daa77ec5319c-kube-api-access-96f4c\") pod \"cert-manager-86cb77c54b-8pwdf\" (UID: \"f31ab59c-7288-4ebb-82b4-daa77ec5319c\") " pod="cert-manager/cert-manager-86cb77c54b-8pwdf" Jan 20 11:19:54 crc kubenswrapper[4725]: I0120 11:19:54.565126 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-8pwdf" Jan 20 11:19:54 crc kubenswrapper[4725]: I0120 11:19:54.914827 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6" containerName="elasticsearch" probeResult="failure" output=< Jan 20 11:19:54 crc kubenswrapper[4725]: {"timestamp": "2026-01-20T11:19:54+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 20 11:19:54 crc kubenswrapper[4725]: > Jan 20 11:19:55 crc kubenswrapper[4725]: I0120 11:19:55.110515 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.078205 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.084914 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.091140 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-ca" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.091861 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-global-ca" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.092124 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-sys-config" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.133585 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.186656 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.186722 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-builder-dockercfg-ns4k2-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.186766 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: 
\"kubernetes.io/secret/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-builder-dockercfg-ns4k2-push\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.186787 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.186809 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.186837 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.186851 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc 
kubenswrapper[4725]: I0120 11:19:57.186866 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.186882 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.186931 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvkdx\" (UniqueName: \"kubernetes.io/projected/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-kube-api-access-wvkdx\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.186951 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.186999 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.292135 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-builder-dockercfg-ns4k2-push\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.292197 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.292239 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.292269 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.292288 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.292311 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.292330 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.292367 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvkdx\" (UniqueName: \"kubernetes.io/projected/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-kube-api-access-wvkdx\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.292382 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " 
pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.292423 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.292457 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.292473 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-builder-dockercfg-ns4k2-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.292752 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.293552 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.293959 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.293971 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.294270 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.294485 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.295274 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.295345 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.295366 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.299223 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-builder-dockercfg-ns4k2-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.299593 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-builder-dockercfg-ns4k2-push\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 
11:19:57.311705 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvkdx\" (UniqueName: \"kubernetes.io/projected/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-kube-api-access-wvkdx\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.539836 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:59 crc kubenswrapper[4725]: I0120 11:19:59.684740 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6" containerName="elasticsearch" probeResult="failure" output=< Jan 20 11:19:59 crc kubenswrapper[4725]: {"timestamp": "2026-01-20T11:19:59+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 20 11:19:59 crc kubenswrapper[4725]: > Jan 20 11:20:02 crc kubenswrapper[4725]: I0120 11:20:02.070266 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zxwx5"] Jan 20 11:20:02 crc kubenswrapper[4725]: I0120 11:20:02.071746 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:02 crc kubenswrapper[4725]: I0120 11:20:02.082933 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zxwx5"] Jan 20 11:20:02 crc kubenswrapper[4725]: I0120 11:20:02.224774 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97ae1860-8877-4057-a0b3-75cc22dc085a-catalog-content\") pod \"community-operators-zxwx5\" (UID: \"97ae1860-8877-4057-a0b3-75cc22dc085a\") " pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:02 crc kubenswrapper[4725]: I0120 11:20:02.225281 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97ae1860-8877-4057-a0b3-75cc22dc085a-utilities\") pod \"community-operators-zxwx5\" (UID: \"97ae1860-8877-4057-a0b3-75cc22dc085a\") " pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:02 crc kubenswrapper[4725]: I0120 11:20:02.225389 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-757jj\" (UniqueName: \"kubernetes.io/projected/97ae1860-8877-4057-a0b3-75cc22dc085a-kube-api-access-757jj\") pod \"community-operators-zxwx5\" (UID: \"97ae1860-8877-4057-a0b3-75cc22dc085a\") " pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:02 crc kubenswrapper[4725]: I0120 11:20:02.326227 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97ae1860-8877-4057-a0b3-75cc22dc085a-catalog-content\") pod \"community-operators-zxwx5\" (UID: \"97ae1860-8877-4057-a0b3-75cc22dc085a\") " pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:02 crc kubenswrapper[4725]: I0120 11:20:02.326326 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97ae1860-8877-4057-a0b3-75cc22dc085a-utilities\") pod \"community-operators-zxwx5\" (UID: \"97ae1860-8877-4057-a0b3-75cc22dc085a\") " pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:02 crc kubenswrapper[4725]: I0120 11:20:02.326385 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-757jj\" (UniqueName: \"kubernetes.io/projected/97ae1860-8877-4057-a0b3-75cc22dc085a-kube-api-access-757jj\") pod \"community-operators-zxwx5\" (UID: \"97ae1860-8877-4057-a0b3-75cc22dc085a\") " pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:02 crc kubenswrapper[4725]: I0120 11:20:02.326934 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97ae1860-8877-4057-a0b3-75cc22dc085a-catalog-content\") pod \"community-operators-zxwx5\" (UID: \"97ae1860-8877-4057-a0b3-75cc22dc085a\") " pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:02 crc kubenswrapper[4725]: I0120 11:20:02.327030 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97ae1860-8877-4057-a0b3-75cc22dc085a-utilities\") pod \"community-operators-zxwx5\" (UID: \"97ae1860-8877-4057-a0b3-75cc22dc085a\") " pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:02 crc kubenswrapper[4725]: I0120 11:20:02.362689 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-757jj\" (UniqueName: \"kubernetes.io/projected/97ae1860-8877-4057-a0b3-75cc22dc085a-kube-api-access-757jj\") pod \"community-operators-zxwx5\" (UID: \"97ae1860-8877-4057-a0b3-75cc22dc085a\") " pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:02 crc kubenswrapper[4725]: I0120 11:20:02.396285 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:04 crc kubenswrapper[4725]: I0120 11:20:04.682877 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6" containerName="elasticsearch" probeResult="failure" output=< Jan 20 11:20:04 crc kubenswrapper[4725]: {"timestamp": "2026-01-20T11:20:04+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 20 11:20:04 crc kubenswrapper[4725]: > Jan 20 11:20:05 crc kubenswrapper[4725]: E0120 11:20:05.349141 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df" Jan 20 11:20:05 crc kubenswrapper[4725]: E0120 11:20:05.350171 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cert-manager-webhook,Image:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,Command:[/app/cmd/webhook/webhook],Args:[--dynamic-serving-ca-secret-name=cert-manager-webhook-ca --dynamic-serving-ca-secret-namespace=$(POD_NAMESPACE) --dynamic-serving-dns-names=cert-manager-webhook,cert-manager-webhook.$(POD_NAMESPACE),cert-manager-webhook.$(POD_NAMESPACE).svc --secure-port=10250 
--v=2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:10250,Protocol:TCP,HostIP:,},ContainerPort{Name:healthcheck,HostPort:0,ContainerPort:6080,Protocol:TCP,HostIP:,},ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:9402,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bound-sa-token,ReadOnly:true,MountPath:/var/run/secrets/openshift/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bncq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 healthcheck},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{1 0 
healthcheck},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000690000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-webhook-f4fb5df64-bxlks_cert-manager(8b639e20-8ca7-4b37-8271-ada2858140b9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 11:20:05 crc kubenswrapper[4725]: E0120 11:20:05.353410 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="cert-manager/cert-manager-webhook-f4fb5df64-bxlks" podUID="8b639e20-8ca7-4b37-8271-ada2858140b9" Jan 20 11:20:05 crc kubenswrapper[4725]: E0120 11:20:05.368314 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df\\\"\"" pod="cert-manager/cert-manager-webhook-f4fb5df64-bxlks" podUID="8b639e20-8ca7-4b37-8271-ada2858140b9" Jan 20 11:20:09 crc kubenswrapper[4725]: I0120 
11:20:09.634833 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6" containerName="elasticsearch" probeResult="failure" output=< Jan 20 11:20:09 crc kubenswrapper[4725]: {"timestamp": "2026-01-20T11:20:09+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 20 11:20:09 crc kubenswrapper[4725]: > Jan 20 11:20:12 crc kubenswrapper[4725]: E0120 11:20:12.923295 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a908a23111a624c3fa04dc3105a7a97f48ee60105308bbb6ed42a40d63c2fe" Jan 20 11:20:12 crc kubenswrapper[4725]: E0120 11:20:12.924577 4725 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 20 11:20:12 crc kubenswrapper[4725]: init container &Container{Name:manage-dockerfile,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a908a23111a624c3fa04dc3105a7a97f48ee60105308bbb6ed42a40d63c2fe,Command:[],Args:[openshift-manage-dockerfile 
--v=0],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:BUILD,Value:{"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"service-telemetry-operator-1","namespace":"service-telemetry","uid":"50b6d2b2-7686-4914-9500-f86942896665","resourceVersion":"34292","generation":1,"creationTimestamp":"2026-01-20T11:19:44Z","labels":{"build":"service-telemetry-operator","buildconfig":"service-telemetry-operator","openshift.io/build-config.name":"service-telemetry-operator","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"service-telemetry-operator","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"service-telemetry-operator","uid":"b100e8b9-3104-4055-8964-2638b957a434","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2026-01-20T11:19:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:build":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b100e8b9-3104-4055-8964-2638b957a434\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:dockerfile":{},"f:type":{}},"f:strategy":{"f:dockerStrategy":{".":{},"f:from":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Dockerfile","dockerfile":"FROM quay.io/operator-framework/ansible-operator:v1.38.1\n\n# temporarily switch to root user to adjust image layers\nUSER 0\n# Upstream CI builds need the additional EPEL sources for python3-passlib and python3-bcrypt but have no 
working repos to install epel-release\n# NO_PROXY is undefined in upstream CI builds, but defined (usually blank) during openshift builds (a possibly brittle hack)\nRUN bash -c -- 'if [ \"${NO_PROXY:-__ZZZZZ}\" == \"__ZZZZZ\" ]; then echo \"Applying upstream EPEL hacks\" \u0026\u0026 echo -e \"-----BEGIN PGP PUBLIC KEY BLOCK-----\\nmQINBGE3mOsBEACsU+XwJWDJVkItBaugXhXIIkb9oe+7aadELuVo0kBmc3HXt/Yp\\nCJW9hHEiGZ6z2jwgPqyJjZhCvcAWvgzKcvqE+9i0NItV1rzfxrBe2BtUtZmVcuE6\\n2b+SPfxQ2Hr8llaawRjt8BCFX/ZzM4/1Qk+EzlfTcEcpkMf6wdO7kD6ulBk/tbsW\\nDHX2lNcxszTf+XP9HXHWJlA2xBfP+Dk4gl4DnO2Y1xR0OSywE/QtvEbN5cY94ieu\\nn7CBy29AleMhmbnx9pw3NyxcFIAsEZHJoU4ZW9ulAJ/ogttSyAWeacW7eJGW31/Z\\n39cS+I4KXJgeGRI20RmpqfH0tuT+X5Da59YpjYxkbhSK3HYBVnNPhoJFUc2j5iKy\\nXLgkapu1xRnEJhw05kr4LCbud0NTvfecqSqa+59kuVc+zWmfTnGTYc0PXZ6Oa3rK\\n44UOmE6eAT5zd/ToleDO0VesN+EO7CXfRsm7HWGpABF5wNK3vIEF2uRr2VJMvgqS\\n9eNwhJyOzoca4xFSwCkc6dACGGkV+CqhufdFBhmcAsUotSxe3zmrBjqA0B/nxIvH\\nDVgOAMnVCe+Lmv8T0mFgqZSJdIUdKjnOLu/GRFhjDKIak4jeMBMTYpVnU+HhMHLq\\nuDiZkNEvEEGhBQmZuI8J55F/a6UURnxUwT3piyi3Pmr2IFD7ahBxPzOBCQARAQAB\\ntCdGZWRvcmEgKGVwZWw5KSA8ZXBlbEBmZWRvcmFwcm9qZWN0Lm9yZz6JAk4EEwEI\\nADgWIQT/itE0RZcQbs6BO5GKOHK/MihGfAUCYTeY6wIbDwULCQgHAgYVCgkICwIE\\nFgIDAQIeAQIXgAAKCRCKOHK/MihGfFX/EACBPWv20+ttYu1A5WvtHJPzwbj0U4yF\\n3zTQpBglQ2UfkRpYdipTlT3Ih6j5h2VmgRPtINCc/ZE28adrWpBoeFIS2YAKOCLC\\nnZYtHl2nCoLq1U7FSttUGsZ/t8uGCBgnugTfnIYcmlP1jKKA6RJAclK89evDQX5n\\nR9ZD+Cq3CBMlttvSTCht0qQVlwycedH8iWyYgP/mF0W35BIn7NuuZwWhgR00n/VG\\n4nbKPOzTWbsP45awcmivdrS74P6mL84WfkghipdmcoyVb1B8ZP4Y/Ke0RXOnLhNe\\nCfrXXvuW+Pvg2RTfwRDtehGQPAgXbmLmz2ZkV69RGIr54HJv84NDbqZovRTMr7gL\\n9k3ciCzXCiYQgM8yAyGHV0KEhFSQ1HV7gMnt9UmxbxBE2pGU7vu3CwjYga5DpwU7\\nw5wu1TmM5KgZtZvuWOTDnqDLf0cKoIbW8FeeCOn24elcj32bnQDuF9DPey1mqcvT\\n/yEo/Ushyz6CVYxN8DGgcy2M9JOsnmjDx02h6qgWGWDuKgb9jZrvRedpAQCeemEd\\nfhEs6ihqVxRFl16HxC4EVijybhAL76SsM2nbtIqW1apBQJQpXWtQwwdvgTVpdEtE\\nr4ArVJYX5LrswnWEQMOelugUG6S3ZjMfcyOa/O0364iY73vyVgaYK+2XtT2usMux\\nVL469Kj5m13T6w==\\n=Mjs/\\n-----END PGP PUBLIC KEY 
BLOCK-----\" \u003e /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-9 \u0026\u0026 echo -e \"[epel]\\nname=Extra Packages for Enterprise Linux 9 - \\$basearch\\nmetalink=https://mirrors.fedoraproject.org/metalink?repo=epel-9\u0026arch=\\$basearch\u0026infra=\\$infra\u0026content=\\$contentdir\\nenabled=1\\ngpgcheck=1\\ngpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-9\" \u003e /etc/yum.repos.d/epel.repo; fi'\n\n# update the base image to allow forward-looking optimistic updates during the testing phase, with the added benefit of helping move closer to passing security scans.\n# -- excludes ansible so it remains at 2.9 tag as shipped with the base image\n# -- installs python3-passlib and python3-bcrypt for oauth-proxy interface\n# -- cleans up the cached data from dnf to keep the image as small as possible\nRUN dnf update -y --exclude=ansible* \u0026\u0026 dnf install -y python3-passlib python3-bcrypt \u0026\u0026 dnf clean all \u0026\u0026 rm -rf /var/cache/dnf\n\nCOPY requirements.yml ${HOME}/requirements.yml\nRUN ansible-galaxy collection install -r ${HOME}/requirements.yml \\\n \u0026\u0026 chmod -R ug+rwx ${HOME}/.ansible\n\n# switch back to user 1001 when running the base image (non-root)\nUSER 1001\n\n# copy in required artifacts for the operator\nCOPY watches.yaml ${HOME}/watches.yaml\nCOPY roles/ ${HOME}/roles/\n"},"strategy":{"type":"Docker","dockerStrategy":{"from":{"kind":"DockerImage","name":"quay.io/operator-framework/ansible-operator@sha256:9895727b7f66bb88fa4c6afdefc7eecf86e6b7c1293920f866a035da9decc58e"},"pullSecret":{"name":"builder-dockercfg-ns4k2"}}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-operator:latest"},"pushSecret":{"name":"builder-dockercfg-ns4k2"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image 
change","imageChangeBuild":{"imageID":"quay.io/operator-framework/ansible-operator@sha256:9895727b7f66bb88fa4c6afdefc7eecf86e6b7c1293920f866a035da9decc58e","fromRef":{"kind":"ImageStreamTag","name":"ansible-operator:v1.38.1"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-operator:latest","config":{"kind":"BuildConfig","namespace":"service-telemetry","name":"service-telemetry-operator"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2026-01-20T11:19:44Z","lastTransitionTime":"2026-01-20T11:19:44Z"}]}} Jan 20 11:20:12 crc kubenswrapper[4725]: ,ValueFrom:nil,},EnvVar{Name:LANG,Value:C.utf8,ValueFrom:nil,},EnvVar{Name:BUILD_REGISTRIES_CONF_PATH,Value:/var/run/configs/openshift.io/build-system/registries.conf,ValueFrom:nil,},EnvVar{Name:BUILD_REGISTRIES_DIR_PATH,Value:/var/run/configs/openshift.io/build-system/registries.d,ValueFrom:nil,},EnvVar{Name:BUILD_SIGNATURE_POLICY_PATH,Value:/var/run/configs/openshift.io/build-system/policy.json,ValueFrom:nil,},EnvVar{Name:BUILD_STORAGE_CONF_PATH,Value:/var/run/configs/openshift.io/build-system/storage.conf,ValueFrom:nil,},EnvVar{Name:BUILD_BLOBCACHE_DIR,Value:/var/cache/blobs,ValueFrom:nil,},EnvVar{Name:HTTP_PROXY,Value:,ValueFrom:nil,},EnvVar{Name:http_proxy,Value:,ValueFrom:nil,},EnvVar{Name:HTTPS_PROXY,Value:,ValueFrom:nil,},EnvVar{Name:https_proxy,Value:,ValueFrom:nil,},EnvVar{Name:NO_PROXY,Value:,ValueFrom:nil,},EnvVar{Name:no_proxy,Value:,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:buildworkdir,ReadOnly:false,MountPath:/tmp/build,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:build-system-configs,ReadOnly:true,MountPath:/var/run/configs/openshift.io/build-system,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name
:build-ca-bundles,ReadOnly:false,MountPath:/var/run/configs/openshift.io/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:build-proxy-ca-bundles,ReadOnly:false,MountPath:/var/run/configs/openshift.io/pki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:build-blob-cache,ReadOnly:false,MountPath:/var/cache/blobs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ww4nf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[CHOWN DAC_OVERRIDE],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod service-telemetry-operator-1-build_service-telemetry(61bedcc7-14db-4cb4-b3df-04733ce92bb2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Jan 20 11:20:12 crc kubenswrapper[4725]: > logger="UnhandledError" Jan 20 11:20:12 crc kubenswrapper[4725]: E0120 11:20:12.925675 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manage-dockerfile\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/service-telemetry-operator-1-build" podUID="61bedcc7-14db-4cb4-b3df-04733ce92bb2" Jan 20 11:20:13 crc 
kubenswrapper[4725]: I0120 11:20:13.586157 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-2m9v2" event={"ID":"62554d79-c9bb-4b40-9153-989791392664","Type":"ContainerStarted","Data":"2bf5e4d04ddc60aa888e7534aeb4c84cb529514686c11bb35176038cd25d0012"} Jan 20 11:20:13 crc kubenswrapper[4725]: I0120 11:20:13.657305 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-2m9v2" podStartSLOduration=2.49767967 podStartE2EDuration="26.657278802s" podCreationTimestamp="2026-01-20 11:19:47 +0000 UTC" firstStartedPulling="2026-01-20 11:19:48.849695797 +0000 UTC m=+917.058017770" lastFinishedPulling="2026-01-20 11:20:13.009294929 +0000 UTC m=+941.217616902" observedRunningTime="2026-01-20 11:20:13.618537815 +0000 UTC m=+941.826859788" watchObservedRunningTime="2026-01-20 11:20:13.657278802 +0000 UTC m=+941.865600775" Jan 20 11:20:13 crc kubenswrapper[4725]: I0120 11:20:13.855627 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-8pwdf"] Jan 20 11:20:13 crc kubenswrapper[4725]: I0120 11:20:13.876406 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zxwx5"] Jan 20 11:20:13 crc kubenswrapper[4725]: I0120 11:20:13.890827 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 20 11:20:13 crc kubenswrapper[4725]: W0120 11:20:13.967750 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf31ab59c_7288_4ebb_82b4_daa77ec5319c.slice/crio-43fec4dd514db1d7b72d66d05f174e95de58c16af7ccf64b5cdab891c2b1729d WatchSource:0}: Error finding container 43fec4dd514db1d7b72d66d05f174e95de58c16af7ccf64b5cdab891c2b1729d: Status 404 returned error can't find the container with id 
43fec4dd514db1d7b72d66d05f174e95de58c16af7ccf64b5cdab891c2b1729d Jan 20 11:20:13 crc kubenswrapper[4725]: W0120 11:20:13.974559 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97ae1860_8877_4057_a0b3_75cc22dc085a.slice/crio-201bcb06bb9a0beb9ac1fd2b547b80e9dbfd651db405d0308c003374f9fcac26 WatchSource:0}: Error finding container 201bcb06bb9a0beb9ac1fd2b547b80e9dbfd651db405d0308c003374f9fcac26: Status 404 returned error can't find the container with id 201bcb06bb9a0beb9ac1fd2b547b80e9dbfd651db405d0308c003374f9fcac26 Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.195637 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.275435 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/61bedcc7-14db-4cb4-b3df-04733ce92bb2-builder-dockercfg-ns4k2-pull\") pod \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.275916 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-buildworkdir\") pod \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.275997 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/61bedcc7-14db-4cb4-b3df-04733ce92bb2-builder-dockercfg-ns4k2-push\") pod \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.276035 4725 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-container-storage-run\") pod \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.276131 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-proxy-ca-bundles\") pod \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.276167 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-blob-cache\") pod \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.276228 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/61bedcc7-14db-4cb4-b3df-04733ce92bb2-node-pullsecrets\") pod \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.276261 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/61bedcc7-14db-4cb4-b3df-04733ce92bb2-buildcachedir\") pod \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.276305 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-container-storage-root\") pod \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.276338 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-system-configs\") pod \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.276364 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww4nf\" (UniqueName: \"kubernetes.io/projected/61bedcc7-14db-4cb4-b3df-04733ce92bb2-kube-api-access-ww4nf\") pod \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.276394 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-ca-bundles\") pod \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.277093 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "61bedcc7-14db-4cb4-b3df-04733ce92bb2" (UID: "61bedcc7-14db-4cb4-b3df-04733ce92bb2"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.277307 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "61bedcc7-14db-4cb4-b3df-04733ce92bb2" (UID: "61bedcc7-14db-4cb4-b3df-04733ce92bb2"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.277664 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "61bedcc7-14db-4cb4-b3df-04733ce92bb2" (UID: "61bedcc7-14db-4cb4-b3df-04733ce92bb2"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.277726 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61bedcc7-14db-4cb4-b3df-04733ce92bb2-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "61bedcc7-14db-4cb4-b3df-04733ce92bb2" (UID: "61bedcc7-14db-4cb4-b3df-04733ce92bb2"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.277755 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61bedcc7-14db-4cb4-b3df-04733ce92bb2-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "61bedcc7-14db-4cb4-b3df-04733ce92bb2" (UID: "61bedcc7-14db-4cb4-b3df-04733ce92bb2"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.277766 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "61bedcc7-14db-4cb4-b3df-04733ce92bb2" (UID: "61bedcc7-14db-4cb4-b3df-04733ce92bb2"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.277882 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "61bedcc7-14db-4cb4-b3df-04733ce92bb2" (UID: "61bedcc7-14db-4cb4-b3df-04733ce92bb2"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.277924 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "61bedcc7-14db-4cb4-b3df-04733ce92bb2" (UID: "61bedcc7-14db-4cb4-b3df-04733ce92bb2"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.278213 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "61bedcc7-14db-4cb4-b3df-04733ce92bb2" (UID: "61bedcc7-14db-4cb4-b3df-04733ce92bb2"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.280989 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61bedcc7-14db-4cb4-b3df-04733ce92bb2-builder-dockercfg-ns4k2-pull" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-pull") pod "61bedcc7-14db-4cb4-b3df-04733ce92bb2" (UID: "61bedcc7-14db-4cb4-b3df-04733ce92bb2"). InnerVolumeSpecName "builder-dockercfg-ns4k2-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.281182 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61bedcc7-14db-4cb4-b3df-04733ce92bb2-builder-dockercfg-ns4k2-push" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-push") pod "61bedcc7-14db-4cb4-b3df-04733ce92bb2" (UID: "61bedcc7-14db-4cb4-b3df-04733ce92bb2"). InnerVolumeSpecName "builder-dockercfg-ns4k2-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.281301 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61bedcc7-14db-4cb4-b3df-04733ce92bb2-kube-api-access-ww4nf" (OuterVolumeSpecName: "kube-api-access-ww4nf") pod "61bedcc7-14db-4cb4-b3df-04733ce92bb2" (UID: "61bedcc7-14db-4cb4-b3df-04733ce92bb2"). InnerVolumeSpecName "kube-api-access-ww4nf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.378577 4725 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/61bedcc7-14db-4cb4-b3df-04733ce92bb2-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.378623 4725 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/61bedcc7-14db-4cb4-b3df-04733ce92bb2-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.378668 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.378685 4725 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.378700 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ww4nf\" (UniqueName: \"kubernetes.io/projected/61bedcc7-14db-4cb4-b3df-04733ce92bb2-kube-api-access-ww4nf\") on node \"crc\" DevicePath \"\"" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.378712 4725 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.378724 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/61bedcc7-14db-4cb4-b3df-04733ce92bb2-builder-dockercfg-ns4k2-pull\") on node \"crc\" DevicePath 
\"\"" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.378739 4725 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.378750 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/61bedcc7-14db-4cb4-b3df-04733ce92bb2-builder-dockercfg-ns4k2-push\") on node \"crc\" DevicePath \"\"" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.378763 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.378774 4725 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.378787 4725 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.592103 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"60ad3a7d-367d-4604-a9d4-c6e3baf344ac","Type":"ContainerStarted","Data":"bdc0063b36b37d06dc379856cb5b0fadc0c09bbabf1f512be45d0a83560cacb7"} Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.593025 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" 
event={"ID":"61bedcc7-14db-4cb4-b3df-04733ce92bb2","Type":"ContainerDied","Data":"bc9c0c7add9c510f35cce2bdbad090847dfc1e38b570cc2a6f8e27b6e79c3ca2"} Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.593067 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.594486 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxwx5" event={"ID":"97ae1860-8877-4057-a0b3-75cc22dc085a","Type":"ContainerStarted","Data":"f10b678b8ec4eea4ea61b8359a1d7d3bd1dd7ebbe07e6f62000e317148f969c3"} Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.594531 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxwx5" event={"ID":"97ae1860-8877-4057-a0b3-75cc22dc085a","Type":"ContainerStarted","Data":"201bcb06bb9a0beb9ac1fd2b547b80e9dbfd651db405d0308c003374f9fcac26"} Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.596213 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-8pwdf" event={"ID":"f31ab59c-7288-4ebb-82b4-daa77ec5319c","Type":"ContainerStarted","Data":"43fec4dd514db1d7b72d66d05f174e95de58c16af7ccf64b5cdab891c2b1729d"} Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.654780 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.654839 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.941125 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61bedcc7-14db-4cb4-b3df-04733ce92bb2" path="/var/lib/kubelet/pods/61bedcc7-14db-4cb4-b3df-04733ce92bb2/volumes" Jan 20 11:20:15 crc kubenswrapper[4725]: I0120 11:20:15.009192 4725 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Jan 20 11:20:15 crc kubenswrapper[4725]: I0120 11:20:15.604590 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-8pwdf" event={"ID":"f31ab59c-7288-4ebb-82b4-daa77ec5319c","Type":"ContainerStarted","Data":"27a47e4a30c80d1867b3b183dc5f9d11f046145c6c3fa8ee2822bba63ec93501"} Jan 20 11:20:15 crc kubenswrapper[4725]: I0120 11:20:15.609206 4725 generic.go:334] "Generic (PLEG): container finished" podID="97ae1860-8877-4057-a0b3-75cc22dc085a" containerID="f10b678b8ec4eea4ea61b8359a1d7d3bd1dd7ebbe07e6f62000e317148f969c3" exitCode=0 Jan 20 11:20:15 crc kubenswrapper[4725]: I0120 11:20:15.609253 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxwx5" event={"ID":"97ae1860-8877-4057-a0b3-75cc22dc085a","Type":"ContainerDied","Data":"f10b678b8ec4eea4ea61b8359a1d7d3bd1dd7ebbe07e6f62000e317148f969c3"} Jan 20 11:20:15 crc kubenswrapper[4725]: I0120 11:20:15.630805 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-8pwdf" podStartSLOduration=21.630779659 podStartE2EDuration="21.630779659s" podCreationTimestamp="2026-01-20 11:19:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:20:15.624598545 +0000 UTC m=+943.832920538" watchObservedRunningTime="2026-01-20 11:20:15.630779659 +0000 UTC m=+943.839101642" Jan 20 11:20:16 crc kubenswrapper[4725]: I0120 11:20:16.618571 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxwx5" event={"ID":"97ae1860-8877-4057-a0b3-75cc22dc085a","Type":"ContainerStarted","Data":"52450051a6d1549d1549870814b5b9acdd22930bac4cef7a134b62bfd082cec6"} Jan 20 11:20:16 crc kubenswrapper[4725]: I0120 11:20:16.621358 4725 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"60ad3a7d-367d-4604-a9d4-c6e3baf344ac","Type":"ContainerStarted","Data":"ee779ff51e27fd1708f83d60b373dec8853b8d3de87c0c17f8dc4fb9cab1a4a0"} Jan 20 11:20:17 crc kubenswrapper[4725]: I0120 11:20:17.632445 4725 generic.go:334] "Generic (PLEG): container finished" podID="97ae1860-8877-4057-a0b3-75cc22dc085a" containerID="52450051a6d1549d1549870814b5b9acdd22930bac4cef7a134b62bfd082cec6" exitCode=0 Jan 20 11:20:17 crc kubenswrapper[4725]: I0120 11:20:17.632549 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxwx5" event={"ID":"97ae1860-8877-4057-a0b3-75cc22dc085a","Type":"ContainerDied","Data":"52450051a6d1549d1549870814b5b9acdd22930bac4cef7a134b62bfd082cec6"} Jan 20 11:20:18 crc kubenswrapper[4725]: I0120 11:20:18.647660 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxwx5" event={"ID":"97ae1860-8877-4057-a0b3-75cc22dc085a","Type":"ContainerStarted","Data":"cf78224c3b14c5e02f3c08accd80c5248425db36f3d3323b9e50901a9ba2911e"} Jan 20 11:20:18 crc kubenswrapper[4725]: I0120 11:20:18.700326 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zxwx5" podStartSLOduration=13.865715752 podStartE2EDuration="16.70030341s" podCreationTimestamp="2026-01-20 11:20:02 +0000 UTC" firstStartedPulling="2026-01-20 11:20:15.610800831 +0000 UTC m=+943.819122804" lastFinishedPulling="2026-01-20 11:20:18.445388489 +0000 UTC m=+946.653710462" observedRunningTime="2026-01-20 11:20:18.673875019 +0000 UTC m=+946.882196992" watchObservedRunningTime="2026-01-20 11:20:18.70030341 +0000 UTC m=+946.908625383" Jan 20 11:20:19 crc kubenswrapper[4725]: I0120 11:20:19.654703 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-bxlks" 
event={"ID":"8b639e20-8ca7-4b37-8271-ada2858140b9","Type":"ContainerStarted","Data":"2add89d30e143f744c1cb17591bebb1eb8eea1f2bd242850edb4af57b7d84569"} Jan 20 11:20:19 crc kubenswrapper[4725]: I0120 11:20:19.655726 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-bxlks" Jan 20 11:20:19 crc kubenswrapper[4725]: I0120 11:20:19.743822 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-bxlks" podStartSLOduration=-9223372001.110983 podStartE2EDuration="35.743792101s" podCreationTimestamp="2026-01-20 11:19:44 +0000 UTC" firstStartedPulling="2026-01-20 11:19:46.138567808 +0000 UTC m=+914.346889781" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:20:19.742097587 +0000 UTC m=+947.950419570" watchObservedRunningTime="2026-01-20 11:20:19.743792101 +0000 UTC m=+947.952114074" Jan 20 11:20:22 crc kubenswrapper[4725]: I0120 11:20:22.396457 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:22 crc kubenswrapper[4725]: I0120 11:20:22.396834 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:22 crc kubenswrapper[4725]: I0120 11:20:22.452851 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:24 crc kubenswrapper[4725]: I0120 11:20:24.688275 4725 generic.go:334] "Generic (PLEG): container finished" podID="60ad3a7d-367d-4604-a9d4-c6e3baf344ac" containerID="ee779ff51e27fd1708f83d60b373dec8853b8d3de87c0c17f8dc4fb9cab1a4a0" exitCode=0 Jan 20 11:20:24 crc kubenswrapper[4725]: I0120 11:20:24.688348 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" 
event={"ID":"60ad3a7d-367d-4604-a9d4-c6e3baf344ac","Type":"ContainerDied","Data":"ee779ff51e27fd1708f83d60b373dec8853b8d3de87c0c17f8dc4fb9cab1a4a0"} Jan 20 11:20:25 crc kubenswrapper[4725]: I0120 11:20:25.334254 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-bxlks" Jan 20 11:20:25 crc kubenswrapper[4725]: I0120 11:20:25.696268 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"60ad3a7d-367d-4604-a9d4-c6e3baf344ac","Type":"ContainerStarted","Data":"43ba998c4f90eecc50d85d25e6ba1a6776cac8c3c2f9a35a8e03f8ec2c0f026b"} Jan 20 11:20:25 crc kubenswrapper[4725]: I0120 11:20:25.736002 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_60ad3a7d-367d-4604-a9d4-c6e3baf344ac/manage-dockerfile/0.log" Jan 20 11:20:26 crc kubenswrapper[4725]: I0120 11:20:26.707048 4725 generic.go:334] "Generic (PLEG): container finished" podID="60ad3a7d-367d-4604-a9d4-c6e3baf344ac" containerID="43ba998c4f90eecc50d85d25e6ba1a6776cac8c3c2f9a35a8e03f8ec2c0f026b" exitCode=0 Jan 20 11:20:26 crc kubenswrapper[4725]: I0120 11:20:26.707127 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"60ad3a7d-367d-4604-a9d4-c6e3baf344ac","Type":"ContainerDied","Data":"43ba998c4f90eecc50d85d25e6ba1a6776cac8c3c2f9a35a8e03f8ec2c0f026b"} Jan 20 11:20:26 crc kubenswrapper[4725]: I0120 11:20:26.707513 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"60ad3a7d-367d-4604-a9d4-c6e3baf344ac","Type":"ContainerStarted","Data":"20c11d3a65716216d93d71b1783145f096f25894367021ae7494d22cc9d152e7"} Jan 20 11:20:26 crc kubenswrapper[4725]: I0120 11:20:26.735893 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="service-telemetry/service-telemetry-operator-2-build" podStartSLOduration=28.682577106 podStartE2EDuration="29.735873696s" podCreationTimestamp="2026-01-20 11:19:57 +0000 UTC" firstStartedPulling="2026-01-20 11:20:13.986677763 +0000 UTC m=+942.194999736" lastFinishedPulling="2026-01-20 11:20:15.039974353 +0000 UTC m=+943.248296326" observedRunningTime="2026-01-20 11:20:26.733547503 +0000 UTC m=+954.941869486" watchObservedRunningTime="2026-01-20 11:20:26.735873696 +0000 UTC m=+954.944195689" Jan 20 11:20:32 crc kubenswrapper[4725]: I0120 11:20:32.564656 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:32 crc kubenswrapper[4725]: I0120 11:20:32.621348 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zxwx5"] Jan 20 11:20:33 crc kubenswrapper[4725]: I0120 11:20:33.016009 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zxwx5" podUID="97ae1860-8877-4057-a0b3-75cc22dc085a" containerName="registry-server" containerID="cri-o://cf78224c3b14c5e02f3c08accd80c5248425db36f3d3323b9e50901a9ba2911e" gracePeriod=2 Jan 20 11:20:34 crc kubenswrapper[4725]: I0120 11:20:34.026025 4725 generic.go:334] "Generic (PLEG): container finished" podID="97ae1860-8877-4057-a0b3-75cc22dc085a" containerID="cf78224c3b14c5e02f3c08accd80c5248425db36f3d3323b9e50901a9ba2911e" exitCode=0 Jan 20 11:20:34 crc kubenswrapper[4725]: I0120 11:20:34.026129 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxwx5" event={"ID":"97ae1860-8877-4057-a0b3-75cc22dc085a","Type":"ContainerDied","Data":"cf78224c3b14c5e02f3c08accd80c5248425db36f3d3323b9e50901a9ba2911e"} Jan 20 11:20:34 crc kubenswrapper[4725]: I0120 11:20:34.379442 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:34 crc kubenswrapper[4725]: I0120 11:20:34.382774 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-757jj\" (UniqueName: \"kubernetes.io/projected/97ae1860-8877-4057-a0b3-75cc22dc085a-kube-api-access-757jj\") pod \"97ae1860-8877-4057-a0b3-75cc22dc085a\" (UID: \"97ae1860-8877-4057-a0b3-75cc22dc085a\") " Jan 20 11:20:34 crc kubenswrapper[4725]: I0120 11:20:34.382833 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97ae1860-8877-4057-a0b3-75cc22dc085a-catalog-content\") pod \"97ae1860-8877-4057-a0b3-75cc22dc085a\" (UID: \"97ae1860-8877-4057-a0b3-75cc22dc085a\") " Jan 20 11:20:34 crc kubenswrapper[4725]: I0120 11:20:34.387510 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97ae1860-8877-4057-a0b3-75cc22dc085a-utilities\") pod \"97ae1860-8877-4057-a0b3-75cc22dc085a\" (UID: \"97ae1860-8877-4057-a0b3-75cc22dc085a\") " Jan 20 11:20:34 crc kubenswrapper[4725]: I0120 11:20:34.388950 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97ae1860-8877-4057-a0b3-75cc22dc085a-utilities" (OuterVolumeSpecName: "utilities") pod "97ae1860-8877-4057-a0b3-75cc22dc085a" (UID: "97ae1860-8877-4057-a0b3-75cc22dc085a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:20:34 crc kubenswrapper[4725]: I0120 11:20:34.393777 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97ae1860-8877-4057-a0b3-75cc22dc085a-kube-api-access-757jj" (OuterVolumeSpecName: "kube-api-access-757jj") pod "97ae1860-8877-4057-a0b3-75cc22dc085a" (UID: "97ae1860-8877-4057-a0b3-75cc22dc085a"). InnerVolumeSpecName "kube-api-access-757jj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:20:34 crc kubenswrapper[4725]: I0120 11:20:34.452924 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97ae1860-8877-4057-a0b3-75cc22dc085a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97ae1860-8877-4057-a0b3-75cc22dc085a" (UID: "97ae1860-8877-4057-a0b3-75cc22dc085a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:20:34 crc kubenswrapper[4725]: I0120 11:20:34.489873 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-757jj\" (UniqueName: \"kubernetes.io/projected/97ae1860-8877-4057-a0b3-75cc22dc085a-kube-api-access-757jj\") on node \"crc\" DevicePath \"\"" Jan 20 11:20:34 crc kubenswrapper[4725]: I0120 11:20:34.489918 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97ae1860-8877-4057-a0b3-75cc22dc085a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:20:34 crc kubenswrapper[4725]: I0120 11:20:34.489928 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97ae1860-8877-4057-a0b3-75cc22dc085a-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:20:35 crc kubenswrapper[4725]: I0120 11:20:35.474955 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxwx5" event={"ID":"97ae1860-8877-4057-a0b3-75cc22dc085a","Type":"ContainerDied","Data":"201bcb06bb9a0beb9ac1fd2b547b80e9dbfd651db405d0308c003374f9fcac26"} Jan 20 11:20:35 crc kubenswrapper[4725]: I0120 11:20:35.475037 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:35 crc kubenswrapper[4725]: I0120 11:20:35.476238 4725 scope.go:117] "RemoveContainer" containerID="cf78224c3b14c5e02f3c08accd80c5248425db36f3d3323b9e50901a9ba2911e" Jan 20 11:20:35 crc kubenswrapper[4725]: I0120 11:20:35.513262 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zxwx5"] Jan 20 11:20:35 crc kubenswrapper[4725]: I0120 11:20:35.518143 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zxwx5"] Jan 20 11:20:35 crc kubenswrapper[4725]: I0120 11:20:35.697864 4725 scope.go:117] "RemoveContainer" containerID="52450051a6d1549d1549870814b5b9acdd22930bac4cef7a134b62bfd082cec6" Jan 20 11:20:35 crc kubenswrapper[4725]: I0120 11:20:35.721096 4725 scope.go:117] "RemoveContainer" containerID="f10b678b8ec4eea4ea61b8359a1d7d3bd1dd7ebbe07e6f62000e317148f969c3" Jan 20 11:20:36 crc kubenswrapper[4725]: I0120 11:20:36.943541 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97ae1860-8877-4057-a0b3-75cc22dc085a" path="/var/lib/kubelet/pods/97ae1860-8877-4057-a0b3-75cc22dc085a/volumes" Jan 20 11:20:56 crc kubenswrapper[4725]: I0120 11:20:56.728308 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:20:56 crc kubenswrapper[4725]: I0120 11:20:56.729025 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:21:26 crc kubenswrapper[4725]: 
I0120 11:21:26.728193 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:21:26 crc kubenswrapper[4725]: I0120 11:21:26.728765 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:21:56 crc kubenswrapper[4725]: I0120 11:21:56.727692 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:21:56 crc kubenswrapper[4725]: I0120 11:21:56.728484 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:21:56 crc kubenswrapper[4725]: I0120 11:21:56.728583 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:21:56 crc kubenswrapper[4725]: I0120 11:21:56.729591 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"617ea79dfea330e669f8b0c629d26e31c927c0e6f932d5db78fa3a4c4d666946"} 
pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 11:21:56 crc kubenswrapper[4725]: I0120 11:21:56.729700 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" containerID="cri-o://617ea79dfea330e669f8b0c629d26e31c927c0e6f932d5db78fa3a4c4d666946" gracePeriod=600 Jan 20 11:21:57 crc kubenswrapper[4725]: I0120 11:21:57.848184 4725 generic.go:334] "Generic (PLEG): container finished" podID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerID="617ea79dfea330e669f8b0c629d26e31c927c0e6f932d5db78fa3a4c4d666946" exitCode=0 Jan 20 11:21:57 crc kubenswrapper[4725]: I0120 11:21:57.848255 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerDied","Data":"617ea79dfea330e669f8b0c629d26e31c927c0e6f932d5db78fa3a4c4d666946"} Jan 20 11:21:57 crc kubenswrapper[4725]: I0120 11:21:57.848527 4725 scope.go:117] "RemoveContainer" containerID="f824083bcf978a042383462398bd5ed39ef803d17307d0e1c02d7c37c541d2e2" Jan 20 11:21:58 crc kubenswrapper[4725]: I0120 11:21:58.857873 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"aa33ee4c62e22f867acace91c0f155de88e0f9773671659eb6aa460399ed540e"} Jan 20 11:22:15 crc kubenswrapper[4725]: I0120 11:22:15.979185 4725 generic.go:334] "Generic (PLEG): container finished" podID="60ad3a7d-367d-4604-a9d4-c6e3baf344ac" containerID="20c11d3a65716216d93d71b1783145f096f25894367021ae7494d22cc9d152e7" exitCode=0 Jan 20 11:22:15 crc kubenswrapper[4725]: I0120 11:22:15.979370 4725 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"60ad3a7d-367d-4604-a9d4-c6e3baf344ac","Type":"ContainerDied","Data":"20c11d3a65716216d93d71b1783145f096f25894367021ae7494d22cc9d152e7"} Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.269944 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.382427 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-proxy-ca-bundles\") pod \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.382609 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-buildcachedir\") pod \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.382657 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-system-configs\") pod \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.382681 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-builder-dockercfg-ns4k2-push\") pod \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.382704 4725 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-blob-cache\") pod \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.382720 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-node-pullsecrets\") pod \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.382743 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-container-storage-root\") pod \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.382776 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-ca-bundles\") pod \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.382775 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "60ad3a7d-367d-4604-a9d4-c6e3baf344ac" (UID: "60ad3a7d-367d-4604-a9d4-c6e3baf344ac"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.382814 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-buildworkdir\") pod \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.382911 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvkdx\" (UniqueName: \"kubernetes.io/projected/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-kube-api-access-wvkdx\") pod \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.382969 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-builder-dockercfg-ns4k2-pull\") pod \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.383027 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-container-storage-run\") pod \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.383043 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "60ad3a7d-367d-4604-a9d4-c6e3baf344ac" (UID: "60ad3a7d-367d-4604-a9d4-c6e3baf344ac"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.384612 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "60ad3a7d-367d-4604-a9d4-c6e3baf344ac" (UID: "60ad3a7d-367d-4604-a9d4-c6e3baf344ac"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.384758 4725 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.384810 4725 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.384826 4725 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.385003 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "60ad3a7d-367d-4604-a9d4-c6e3baf344ac" (UID: "60ad3a7d-367d-4604-a9d4-c6e3baf344ac"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.385168 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "60ad3a7d-367d-4604-a9d4-c6e3baf344ac" (UID: "60ad3a7d-367d-4604-a9d4-c6e3baf344ac"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.385938 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "60ad3a7d-367d-4604-a9d4-c6e3baf344ac" (UID: "60ad3a7d-367d-4604-a9d4-c6e3baf344ac"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.397294 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-builder-dockercfg-ns4k2-pull" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-pull") pod "60ad3a7d-367d-4604-a9d4-c6e3baf344ac" (UID: "60ad3a7d-367d-4604-a9d4-c6e3baf344ac"). InnerVolumeSpecName "builder-dockercfg-ns4k2-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.397358 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-builder-dockercfg-ns4k2-push" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-push") pod "60ad3a7d-367d-4604-a9d4-c6e3baf344ac" (UID: "60ad3a7d-367d-4604-a9d4-c6e3baf344ac"). InnerVolumeSpecName "builder-dockercfg-ns4k2-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.397371 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-kube-api-access-wvkdx" (OuterVolumeSpecName: "kube-api-access-wvkdx") pod "60ad3a7d-367d-4604-a9d4-c6e3baf344ac" (UID: "60ad3a7d-367d-4604-a9d4-c6e3baf344ac"). InnerVolumeSpecName "kube-api-access-wvkdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.425335 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "60ad3a7d-367d-4604-a9d4-c6e3baf344ac" (UID: "60ad3a7d-367d-4604-a9d4-c6e3baf344ac"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.486024 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-builder-dockercfg-ns4k2-push\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.486071 4725 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.486102 4725 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.486115 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvkdx\" (UniqueName: 
\"kubernetes.io/projected/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-kube-api-access-wvkdx\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.486126 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-builder-dockercfg-ns4k2-pull\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.486138 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.486149 4725 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.566871 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "60ad3a7d-367d-4604-a9d4-c6e3baf344ac" (UID: "60ad3a7d-367d-4604-a9d4-c6e3baf344ac"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.587997 4725 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:18 crc kubenswrapper[4725]: I0120 11:22:18.003513 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"60ad3a7d-367d-4604-a9d4-c6e3baf344ac","Type":"ContainerDied","Data":"bdc0063b36b37d06dc379856cb5b0fadc0c09bbabf1f512be45d0a83560cacb7"} Jan 20 11:22:18 crc kubenswrapper[4725]: I0120 11:22:18.003669 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:22:18 crc kubenswrapper[4725]: I0120 11:22:18.003873 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdc0063b36b37d06dc379856cb5b0fadc0c09bbabf1f512be45d0a83560cacb7" Jan 20 11:22:19 crc kubenswrapper[4725]: I0120 11:22:19.932508 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "60ad3a7d-367d-4604-a9d4-c6e3baf344ac" (UID: "60ad3a7d-367d-4604-a9d4-c6e3baf344ac"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:22:20 crc kubenswrapper[4725]: I0120 11:22:20.025999 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.268523 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 20 11:22:22 crc kubenswrapper[4725]: E0120 11:22:22.268895 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97ae1860-8877-4057-a0b3-75cc22dc085a" containerName="extract-utilities" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.268913 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="97ae1860-8877-4057-a0b3-75cc22dc085a" containerName="extract-utilities" Jan 20 11:22:22 crc kubenswrapper[4725]: E0120 11:22:22.268950 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60ad3a7d-367d-4604-a9d4-c6e3baf344ac" containerName="docker-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.268959 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="60ad3a7d-367d-4604-a9d4-c6e3baf344ac" containerName="docker-build" Jan 20 11:22:22 crc kubenswrapper[4725]: E0120 11:22:22.268974 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97ae1860-8877-4057-a0b3-75cc22dc085a" containerName="registry-server" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.268984 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="97ae1860-8877-4057-a0b3-75cc22dc085a" containerName="registry-server" Jan 20 11:22:22 crc kubenswrapper[4725]: E0120 11:22:22.268998 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97ae1860-8877-4057-a0b3-75cc22dc085a" containerName="extract-content" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.269005 4725 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="97ae1860-8877-4057-a0b3-75cc22dc085a" containerName="extract-content" Jan 20 11:22:22 crc kubenswrapper[4725]: E0120 11:22:22.269019 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60ad3a7d-367d-4604-a9d4-c6e3baf344ac" containerName="git-clone" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.269026 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="60ad3a7d-367d-4604-a9d4-c6e3baf344ac" containerName="git-clone" Jan 20 11:22:22 crc kubenswrapper[4725]: E0120 11:22:22.269039 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60ad3a7d-367d-4604-a9d4-c6e3baf344ac" containerName="manage-dockerfile" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.269046 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="60ad3a7d-367d-4604-a9d4-c6e3baf344ac" containerName="manage-dockerfile" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.269246 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="97ae1860-8877-4057-a0b3-75cc22dc085a" containerName="registry-server" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.269283 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="60ad3a7d-367d-4604-a9d4-c6e3baf344ac" containerName="docker-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.270256 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.272551 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-ca"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.275331 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-global-ca"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.275981 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-sys-config"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.277517 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ns4k2"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.290785 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.460117 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd25l\" (UniqueName: \"kubernetes.io/projected/773de81e-167f-4cc1-b0b2-6f97183bc92d-kube-api-access-fd25l\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.460199 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/773de81e-167f-4cc1-b0b2-6f97183bc92d-builder-dockercfg-ns4k2-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.460364 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.460397 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/773de81e-167f-4cc1-b0b2-6f97183bc92d-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.460434 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.460480 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/773de81e-167f-4cc1-b0b2-6f97183bc92d-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.460566 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.460597 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/773de81e-167f-4cc1-b0b2-6f97183bc92d-builder-dockercfg-ns4k2-push\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.460625 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.460743 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.460851 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.460888 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.562234 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.562282 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/773de81e-167f-4cc1-b0b2-6f97183bc92d-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.562323 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.562345 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/773de81e-167f-4cc1-b0b2-6f97183bc92d-builder-dockercfg-ns4k2-push\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.562373 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.562400 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.562431 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.562449 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.562475 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd25l\" (UniqueName: \"kubernetes.io/projected/773de81e-167f-4cc1-b0b2-6f97183bc92d-kube-api-access-fd25l\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.562498 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/773de81e-167f-4cc1-b0b2-6f97183bc92d-builder-dockercfg-ns4k2-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.562515 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.562534 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/773de81e-167f-4cc1-b0b2-6f97183bc92d-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.562541 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/773de81e-167f-4cc1-b0b2-6f97183bc92d-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.562692 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/773de81e-167f-4cc1-b0b2-6f97183bc92d-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.563238 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.563291 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.563349 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.563655 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.564135 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.564184 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.564670 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.568690 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/773de81e-167f-4cc1-b0b2-6f97183bc92d-builder-dockercfg-ns4k2-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.568784 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/773de81e-167f-4cc1-b0b2-6f97183bc92d-builder-dockercfg-ns4k2-push\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.581020 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd25l\" (UniqueName: \"kubernetes.io/projected/773de81e-167f-4cc1-b0b2-6f97183bc92d-kube-api-access-fd25l\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.587964 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.816634 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Jan 20 11:22:23 crc kubenswrapper[4725]: I0120 11:22:23.048716 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"773de81e-167f-4cc1-b0b2-6f97183bc92d","Type":"ContainerStarted","Data":"e143ed9303cc23ae40660343e336bf7f7112b03b8ad95715a6c91ca243263bfc"}
Jan 20 11:22:24 crc kubenswrapper[4725]: I0120 11:22:24.059494 4725 generic.go:334] "Generic (PLEG): container finished" podID="773de81e-167f-4cc1-b0b2-6f97183bc92d" containerID="135099adb6c1ba00def83244671f6d756db543de8887db638c4aa7e04a4e4320" exitCode=0
Jan 20 11:22:24 crc kubenswrapper[4725]: I0120 11:22:24.059582 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"773de81e-167f-4cc1-b0b2-6f97183bc92d","Type":"ContainerDied","Data":"135099adb6c1ba00def83244671f6d756db543de8887db638c4aa7e04a4e4320"}
Jan 20 11:22:25 crc kubenswrapper[4725]: I0120 11:22:25.068991 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"773de81e-167f-4cc1-b0b2-6f97183bc92d","Type":"ContainerStarted","Data":"97723ba68356e746392c9f83e3eacd3d84034b3d22936c905410f195432e1a2a"}
Jan 20 11:22:25 crc kubenswrapper[4725]: I0120 11:22:25.102207 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-1-build" podStartSLOduration=3.10218538 podStartE2EDuration="3.10218538s" podCreationTimestamp="2026-01-20 11:22:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:22:25.097334598 +0000 UTC m=+1073.305656581" watchObservedRunningTime="2026-01-20 11:22:25.10218538 +0000 UTC m=+1073.310507353"
Jan 20 11:22:33 crc kubenswrapper[4725]: I0120 11:22:33.001163 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Jan 20 11:22:33 crc kubenswrapper[4725]: I0120 11:22:33.002166 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/smart-gateway-operator-1-build" podUID="773de81e-167f-4cc1-b0b2-6f97183bc92d" containerName="docker-build" containerID="cri-o://97723ba68356e746392c9f83e3eacd3d84034b3d22936c905410f195432e1a2a" gracePeriod=30
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.596185 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"]
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.597874 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.599591 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.599748 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.599856 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.600033 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-builder-dockercfg-ns4k2-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.600189 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtb6c\" (UniqueName: \"kubernetes.io/projected/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-kube-api-access-mtb6c\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.600338 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.600517 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-builder-dockercfg-ns4k2-push\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.600675 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.600807 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.600953 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.601072 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.600841 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-sys-config"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.601550 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.601581 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-global-ca"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.601621 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-ca"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.638960 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"]
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.703517 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.703906 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.704056 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.704269 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-builder-dockercfg-ns4k2-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.704415 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtb6c\" (UniqueName: \"kubernetes.io/projected/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-kube-api-access-mtb6c\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.704574 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.704269 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.704364 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.704762 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.705555 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.705717 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-builder-dockercfg-ns4k2-push\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.706371 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.706733 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.706805 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.707061 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.707228 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.707123 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.707387 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.707722 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.708053 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.708163 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.710843 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-builder-dockercfg-ns4k2-push\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.718884 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-builder-dockercfg-ns4k2-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.725279 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtb6c\" (UniqueName: \"kubernetes.io/projected/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-kube-api-access-mtb6c\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.920205 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:22:35 crc kubenswrapper[4725]: I0120 11:22:35.215281 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"]
Jan 20 11:22:36 crc kubenswrapper[4725]: I0120 11:22:36.150967 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7","Type":"ContainerStarted","Data":"98181af6e77e8ee77db38b4f0c99449b03d0506d97628efdcf708eefa0be2fbf"}
Jan 20 11:22:38 crc kubenswrapper[4725]: I0120 11:22:38.929813 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_773de81e-167f-4cc1-b0b2-6f97183bc92d/docker-build/0.log"
Jan 20 11:22:38 crc kubenswrapper[4725]: I0120 11:22:38.933791 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build"
Jan 20 11:22:38 crc kubenswrapper[4725]: I0120 11:22:38.993537 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_773de81e-167f-4cc1-b0b2-6f97183bc92d/docker-build/0.log"
Jan 20 11:22:38 crc kubenswrapper[4725]: I0120 11:22:38.994364 4725 generic.go:334] "Generic (PLEG): container finished" podID="773de81e-167f-4cc1-b0b2-6f97183bc92d" containerID="97723ba68356e746392c9f83e3eacd3d84034b3d22936c905410f195432e1a2a" exitCode=1
Jan 20 11:22:38 crc kubenswrapper[4725]: I0120 11:22:38.994408 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"773de81e-167f-4cc1-b0b2-6f97183bc92d","Type":"ContainerDied","Data":"97723ba68356e746392c9f83e3eacd3d84034b3d22936c905410f195432e1a2a"}
Jan 20 11:22:38 crc kubenswrapper[4725]: I0120 11:22:38.994642 4725 scope.go:117] "RemoveContainer" containerID="97723ba68356e746392c9f83e3eacd3d84034b3d22936c905410f195432e1a2a"
Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.049371 4725 scope.go:117] "RemoveContainer" containerID="135099adb6c1ba00def83244671f6d756db543de8887db638c4aa7e04a4e4320"
Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.090056 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/773de81e-167f-4cc1-b0b2-6f97183bc92d-builder-dockercfg-ns4k2-pull\") pod \"773de81e-167f-4cc1-b0b2-6f97183bc92d\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") "
Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.090224 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-system-configs\") pod \"773de81e-167f-4cc1-b0b2-6f97183bc92d\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") "
Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.090330 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/773de81e-167f-4cc1-b0b2-6f97183bc92d-buildcachedir\") pod \"773de81e-167f-4cc1-b0b2-6f97183bc92d\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") "
Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.090431 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-blob-cache\") pod \"773de81e-167f-4cc1-b0b2-6f97183bc92d\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") "
Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.090523 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-proxy-ca-bundles\") pod \"773de81e-167f-4cc1-b0b2-6f97183bc92d\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") "
Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.090548 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-buildworkdir\") pod \"773de81e-167f-4cc1-b0b2-6f97183bc92d\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") "
Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.090605 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd25l\" (UniqueName: \"kubernetes.io/projected/773de81e-167f-4cc1-b0b2-6f97183bc92d-kube-api-access-fd25l\") pod \"773de81e-167f-4cc1-b0b2-6f97183bc92d\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") "
Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.090663 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-container-storage-root\") pod \"773de81e-167f-4cc1-b0b2-6f97183bc92d\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") "
Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.090694 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-ca-bundles\") pod \"773de81e-167f-4cc1-b0b2-6f97183bc92d\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") "
Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.090742 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/773de81e-167f-4cc1-b0b2-6f97183bc92d-builder-dockercfg-ns4k2-push\") pod \"773de81e-167f-4cc1-b0b2-6f97183bc92d\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") "
Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.090780 4725
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/773de81e-167f-4cc1-b0b2-6f97183bc92d-node-pullsecrets\") pod \"773de81e-167f-4cc1-b0b2-6f97183bc92d\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.090862 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-container-storage-run\") pod \"773de81e-167f-4cc1-b0b2-6f97183bc92d\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.091694 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/773de81e-167f-4cc1-b0b2-6f97183bc92d-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "773de81e-167f-4cc1-b0b2-6f97183bc92d" (UID: "773de81e-167f-4cc1-b0b2-6f97183bc92d"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.092248 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/773de81e-167f-4cc1-b0b2-6f97183bc92d-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "773de81e-167f-4cc1-b0b2-6f97183bc92d" (UID: "773de81e-167f-4cc1-b0b2-6f97183bc92d"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.093561 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "773de81e-167f-4cc1-b0b2-6f97183bc92d" (UID: "773de81e-167f-4cc1-b0b2-6f97183bc92d"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.093662 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "773de81e-167f-4cc1-b0b2-6f97183bc92d" (UID: "773de81e-167f-4cc1-b0b2-6f97183bc92d"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.094005 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "773de81e-167f-4cc1-b0b2-6f97183bc92d" (UID: "773de81e-167f-4cc1-b0b2-6f97183bc92d"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.094860 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "773de81e-167f-4cc1-b0b2-6f97183bc92d" (UID: "773de81e-167f-4cc1-b0b2-6f97183bc92d"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.095424 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "773de81e-167f-4cc1-b0b2-6f97183bc92d" (UID: "773de81e-167f-4cc1-b0b2-6f97183bc92d"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.095614 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/773de81e-167f-4cc1-b0b2-6f97183bc92d-builder-dockercfg-ns4k2-pull" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-pull") pod "773de81e-167f-4cc1-b0b2-6f97183bc92d" (UID: "773de81e-167f-4cc1-b0b2-6f97183bc92d"). InnerVolumeSpecName "builder-dockercfg-ns4k2-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.108469 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/773de81e-167f-4cc1-b0b2-6f97183bc92d-builder-dockercfg-ns4k2-push" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-push") pod "773de81e-167f-4cc1-b0b2-6f97183bc92d" (UID: "773de81e-167f-4cc1-b0b2-6f97183bc92d"). InnerVolumeSpecName "builder-dockercfg-ns4k2-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.108852 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/773de81e-167f-4cc1-b0b2-6f97183bc92d-kube-api-access-fd25l" (OuterVolumeSpecName: "kube-api-access-fd25l") pod "773de81e-167f-4cc1-b0b2-6f97183bc92d" (UID: "773de81e-167f-4cc1-b0b2-6f97183bc92d"). InnerVolumeSpecName "kube-api-access-fd25l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.193334 4725 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.193645 4725 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.193656 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd25l\" (UniqueName: \"kubernetes.io/projected/773de81e-167f-4cc1-b0b2-6f97183bc92d-kube-api-access-fd25l\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.193665 4725 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.193674 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/773de81e-167f-4cc1-b0b2-6f97183bc92d-builder-dockercfg-ns4k2-push\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.193683 4725 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/773de81e-167f-4cc1-b0b2-6f97183bc92d-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.193691 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-container-storage-run\") on node \"crc\" DevicePath 
\"\"" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.193701 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/773de81e-167f-4cc1-b0b2-6f97183bc92d-builder-dockercfg-ns4k2-pull\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.193709 4725 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.193721 4725 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/773de81e-167f-4cc1-b0b2-6f97183bc92d-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.233073 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "773de81e-167f-4cc1-b0b2-6f97183bc92d" (UID: "773de81e-167f-4cc1-b0b2-6f97183bc92d"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.295274 4725 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.482247 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "773de81e-167f-4cc1-b0b2-6f97183bc92d" (UID: "773de81e-167f-4cc1-b0b2-6f97183bc92d"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.497958 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:40 crc kubenswrapper[4725]: I0120 11:22:40.003782 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7","Type":"ContainerStarted","Data":"81fe7e7689ade9572879bd6f7042234d45798c3c4c7d5639d8337cc6cf420f3f"} Jan 20 11:22:40 crc kubenswrapper[4725]: I0120 11:22:40.004917 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"773de81e-167f-4cc1-b0b2-6f97183bc92d","Type":"ContainerDied","Data":"e143ed9303cc23ae40660343e336bf7f7112b03b8ad95715a6c91ca243263bfc"} Jan 20 11:22:40 crc kubenswrapper[4725]: I0120 11:22:40.004968 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:40 crc kubenswrapper[4725]: I0120 11:22:40.072144 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 20 11:22:40 crc kubenswrapper[4725]: I0120 11:22:40.081370 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 20 11:22:40 crc kubenswrapper[4725]: I0120 11:22:40.940908 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="773de81e-167f-4cc1-b0b2-6f97183bc92d" path="/var/lib/kubelet/pods/773de81e-167f-4cc1-b0b2-6f97183bc92d/volumes" Jan 20 11:22:41 crc kubenswrapper[4725]: I0120 11:22:41.013841 4725 generic.go:334] "Generic (PLEG): container finished" podID="de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" containerID="81fe7e7689ade9572879bd6f7042234d45798c3c4c7d5639d8337cc6cf420f3f" exitCode=0 Jan 20 11:22:41 crc kubenswrapper[4725]: I0120 11:22:41.013972 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7","Type":"ContainerDied","Data":"81fe7e7689ade9572879bd6f7042234d45798c3c4c7d5639d8337cc6cf420f3f"} Jan 20 11:22:42 crc kubenswrapper[4725]: I0120 11:22:42.026167 4725 generic.go:334] "Generic (PLEG): container finished" podID="de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" containerID="4e2384deeb121a865456908896ebca254f51bb793e35d64ef64a12bdeadadd7a" exitCode=0 Jan 20 11:22:42 crc kubenswrapper[4725]: I0120 11:22:42.026262 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7","Type":"ContainerDied","Data":"4e2384deeb121a865456908896ebca254f51bb793e35d64ef64a12bdeadadd7a"} Jan 20 11:22:42 crc kubenswrapper[4725]: I0120 11:22:42.077993 4725 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7/manage-dockerfile/0.log" Jan 20 11:22:43 crc kubenswrapper[4725]: I0120 11:22:43.037273 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7","Type":"ContainerStarted","Data":"8fcccce2002a99e8d036bd2beffff0773e9c3730f24376f47ce2a54c9456a0d8"} Jan 20 11:22:43 crc kubenswrapper[4725]: I0120 11:22:43.067550 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-2-build" podStartSLOduration=9.067534198 podStartE2EDuration="9.067534198s" podCreationTimestamp="2026-01-20 11:22:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:22:43.06503686 +0000 UTC m=+1091.273358853" watchObservedRunningTime="2026-01-20 11:22:43.067534198 +0000 UTC m=+1091.275856171" Jan 20 11:23:59 crc kubenswrapper[4725]: I0120 11:23:59.767723 4725 generic.go:334] "Generic (PLEG): container finished" podID="de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" containerID="8fcccce2002a99e8d036bd2beffff0773e9c3730f24376f47ce2a54c9456a0d8" exitCode=0 Jan 20 11:23:59 crc kubenswrapper[4725]: I0120 11:23:59.767797 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7","Type":"ContainerDied","Data":"8fcccce2002a99e8d036bd2beffff0773e9c3730f24376f47ce2a54c9456a0d8"} Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.167896 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.318451 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-node-pullsecrets\") pod \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.318667 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-buildworkdir\") pod \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.318632 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" (UID: "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.318717 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-proxy-ca-bundles\") pod \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.318764 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-container-storage-root\") pod \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.318804 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-builder-dockercfg-ns4k2-push\") pod \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.318859 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-container-storage-run\") pod \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.318928 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-buildcachedir\") pod \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.318961 4725 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-blob-cache\") pod \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.318991 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-builder-dockercfg-ns4k2-pull\") pod \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.319042 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtb6c\" (UniqueName: \"kubernetes.io/projected/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-kube-api-access-mtb6c\") pod \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.319068 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-ca-bundles\") pod \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.319119 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-system-configs\") pod \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.319328 4725 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-node-pullsecrets\") on node \"crc\" 
DevicePath \"\"" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.319060 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" (UID: "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.320529 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" (UID: "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.322770 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" (UID: "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.322957 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" (UID: "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.323468 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" (UID: "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.323924 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" (UID: "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.326842 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-builder-dockercfg-ns4k2-pull" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-pull") pod "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" (UID: "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7"). InnerVolumeSpecName "builder-dockercfg-ns4k2-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.329275 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-builder-dockercfg-ns4k2-push" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-push") pod "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" (UID: "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7"). InnerVolumeSpecName "builder-dockercfg-ns4k2-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.330630 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-kube-api-access-mtb6c" (OuterVolumeSpecName: "kube-api-access-mtb6c") pod "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" (UID: "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7"). InnerVolumeSpecName "kube-api-access-mtb6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.420461 4725 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.420509 4725 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.420520 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-builder-dockercfg-ns4k2-push\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.420532 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.420542 4725 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.420553 4725 reconciler_common.go:293] 
"Volume detached for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-builder-dockercfg-ns4k2-pull\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.420563 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtb6c\" (UniqueName: \"kubernetes.io/projected/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-kube-api-access-mtb6c\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.420572 4725 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.420587 4725 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.527639 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" (UID: "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.623162 4725 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-blob-cache\") on node \"crc\" DevicePath \"\""
Jan 20 11:24:02 crc kubenswrapper[4725]: I0120 11:24:02.026841 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7","Type":"ContainerDied","Data":"98181af6e77e8ee77db38b4f0c99449b03d0506d97628efdcf708eefa0be2fbf"}
Jan 20 11:24:02 crc kubenswrapper[4725]: I0120 11:24:02.026901 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98181af6e77e8ee77db38b4f0c99449b03d0506d97628efdcf708eefa0be2fbf"
Jan 20 11:24:02 crc kubenswrapper[4725]: I0120 11:24:02.027051 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build"
Jan 20 11:24:03 crc kubenswrapper[4725]: I0120 11:24:03.493707 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" (UID: "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:24:03 crc kubenswrapper[4725]: I0120 11:24:03.544345 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-container-storage-root\") on node \"crc\" DevicePath \"\""
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.207953 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-1-build"]
Jan 20 11:24:06 crc kubenswrapper[4725]: E0120 11:24:06.210737 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="773de81e-167f-4cc1-b0b2-6f97183bc92d" containerName="docker-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.210804 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="773de81e-167f-4cc1-b0b2-6f97183bc92d" containerName="docker-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: E0120 11:24:06.210836 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" containerName="manage-dockerfile"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.210844 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" containerName="manage-dockerfile"
Jan 20 11:24:06 crc kubenswrapper[4725]: E0120 11:24:06.210860 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="773de81e-167f-4cc1-b0b2-6f97183bc92d" containerName="manage-dockerfile"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.210868 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="773de81e-167f-4cc1-b0b2-6f97183bc92d" containerName="manage-dockerfile"
Jan 20 11:24:06 crc kubenswrapper[4725]: E0120 11:24:06.210884 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" containerName="docker-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.210892 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" containerName="docker-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: E0120 11:24:06.210902 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" containerName="git-clone"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.210908 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" containerName="git-clone"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.211112 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" containerName="docker-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.211136 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="773de81e-167f-4cc1-b0b2-6f97183bc92d" containerName="docker-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.212281 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.214686 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ns4k2"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.215679 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-sys-config"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.216221 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-ca"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.216836 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-global-ca"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.241709 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"]
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.396970 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-container-storage-run\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.397048 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.397122 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8wb5\" (UniqueName: \"kubernetes.io/projected/182d8f8c-6787-460f-8886-13e082da325a-kube-api-access-j8wb5\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.397191 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/182d8f8c-6787-460f-8886-13e082da325a-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.397229 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/182d8f8c-6787-460f-8886-13e082da325a-builder-dockercfg-ns4k2-push\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.397256 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.397330 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/182d8f8c-6787-460f-8886-13e082da325a-buildcachedir\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.397368 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-system-configs\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.397446 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.397492 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-buildworkdir\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.397516 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/182d8f8c-6787-460f-8886-13e082da325a-builder-dockercfg-ns4k2-pull\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.397562 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-container-storage-root\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.499714 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/182d8f8c-6787-460f-8886-13e082da325a-buildcachedir\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.499788 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-system-configs\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.499845 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.499875 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-buildworkdir\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.499899 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/182d8f8c-6787-460f-8886-13e082da325a-builder-dockercfg-ns4k2-pull\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.499905 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/182d8f8c-6787-460f-8886-13e082da325a-buildcachedir\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.499925 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-container-storage-root\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.500040 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-container-storage-run\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.500110 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.500149 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8wb5\" (UniqueName: \"kubernetes.io/projected/182d8f8c-6787-460f-8886-13e082da325a-kube-api-access-j8wb5\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.500180 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/182d8f8c-6787-460f-8886-13e082da325a-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.500207 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/182d8f8c-6787-460f-8886-13e082da325a-builder-dockercfg-ns4k2-push\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.500240 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.500561 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/182d8f8c-6787-460f-8886-13e082da325a-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.500728 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-container-storage-root\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.500735 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-buildworkdir\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.501162 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-system-configs\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.501466 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-container-storage-run\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.501557 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.524695 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.833059 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/182d8f8c-6787-460f-8886-13e082da325a-builder-dockercfg-ns4k2-pull\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.833072 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.835810 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/182d8f8c-6787-460f-8886-13e082da325a-builder-dockercfg-ns4k2-push\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.836483 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8wb5\" (UniqueName: \"kubernetes.io/projected/182d8f8c-6787-460f-8886-13e082da325a-kube-api-access-j8wb5\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.838139 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:07 crc kubenswrapper[4725]: I0120 11:24:07.117059 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"]
Jan 20 11:24:08 crc kubenswrapper[4725]: I0120 11:24:08.077291 4725 generic.go:334] "Generic (PLEG): container finished" podID="182d8f8c-6787-460f-8886-13e082da325a" containerID="c4f2e6c9a2af8b906bd1ba4f2529ffa261f97bfacfd90048175544cbe8a4306b" exitCode=0
Jan 20 11:24:08 crc kubenswrapper[4725]: I0120 11:24:08.077353 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"182d8f8c-6787-460f-8886-13e082da325a","Type":"ContainerDied","Data":"c4f2e6c9a2af8b906bd1ba4f2529ffa261f97bfacfd90048175544cbe8a4306b"}
Jan 20 11:24:08 crc kubenswrapper[4725]: I0120 11:24:08.077798 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"182d8f8c-6787-460f-8886-13e082da325a","Type":"ContainerStarted","Data":"702a14aac73a2067eb1d2ba924037c10061638d34d12490a8dd8993d2df2b036"}
Jan 20 11:24:09 crc kubenswrapper[4725]: I0120 11:24:09.100913 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"182d8f8c-6787-460f-8886-13e082da325a","Type":"ContainerStarted","Data":"6910602968e4bebfe7fdf222aea020a439b1e739664b65c1cb39d5ad08c283ee"}
Jan 20 11:24:16 crc kubenswrapper[4725]: I0120 11:24:16.436121 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-1-build" podStartSLOduration=10.436098051 podStartE2EDuration="10.436098051s" podCreationTimestamp="2026-01-20 11:24:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:24:09.134784481 +0000 UTC m=+1177.343106464" watchObservedRunningTime="2026-01-20 11:24:16.436098051 +0000 UTC m=+1184.644420024"
Jan 20 11:24:16 crc kubenswrapper[4725]: I0120 11:24:16.437365 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"]
Jan 20 11:24:16 crc kubenswrapper[4725]: I0120 11:24:16.437672 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/sg-core-1-build" podUID="182d8f8c-6787-460f-8886-13e082da325a" containerName="docker-build" containerID="cri-o://6910602968e4bebfe7fdf222aea020a439b1e739664b65c1cb39d5ad08c283ee" gracePeriod=30
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.139796 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-2-build"]
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.141669 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.148433 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-sys-config"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.149498 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-global-ca"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.149630 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-ca"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.161981 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"]
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.170700 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_182d8f8c-6787-460f-8886-13e082da325a/docker-build/0.log"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.171659 4725 generic.go:334] "Generic (PLEG): container finished" podID="182d8f8c-6787-460f-8886-13e082da325a" containerID="6910602968e4bebfe7fdf222aea020a439b1e739664b65c1cb39d5ad08c283ee" exitCode=1
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.171765 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"182d8f8c-6787-460f-8886-13e082da325a","Type":"ContainerDied","Data":"6910602968e4bebfe7fdf222aea020a439b1e739664b65c1cb39d5ad08c283ee"}
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.316225 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-buildworkdir\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.316295 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/c6289b31-17e1-4470-b65b-20f1454c9faf-builder-dockercfg-ns4k2-pull\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.316327 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfxhk\" (UniqueName: \"kubernetes.io/projected/c6289b31-17e1-4470-b65b-20f1454c9faf-kube-api-access-tfxhk\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.316362 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-container-storage-root\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.316381 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-container-storage-run\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.316471 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c6289b31-17e1-4470-b65b-20f1454c9faf-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.316564 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/c6289b31-17e1-4470-b65b-20f1454c9faf-builder-dockercfg-ns4k2-push\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.316591 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.316632 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.316655 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c6289b31-17e1-4470-b65b-20f1454c9faf-buildcachedir\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.316689 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-system-configs\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.316768 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.418502 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c6289b31-17e1-4470-b65b-20f1454c9faf-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.418595 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/c6289b31-17e1-4470-b65b-20f1454c9faf-builder-dockercfg-ns4k2-push\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.418626 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.418674 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.418718 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c6289b31-17e1-4470-b65b-20f1454c9faf-buildcachedir\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.418744 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c6289b31-17e1-4470-b65b-20f1454c9faf-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.418774 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-system-configs\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.418887 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c6289b31-17e1-4470-b65b-20f1454c9faf-buildcachedir\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.419889 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.419928 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.420025 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.420114 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-buildworkdir\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.420209 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/c6289b31-17e1-4470-b65b-20f1454c9faf-builder-dockercfg-ns4k2-pull\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.420320 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfxhk\" (UniqueName: \"kubernetes.io/projected/c6289b31-17e1-4470-b65b-20f1454c9faf-kube-api-access-tfxhk\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.420375 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-container-storage-root\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.420375 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-system-configs\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.420421 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-container-storage-run\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.420628 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-buildworkdir\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.420804 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-container-storage-run\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.420887 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.420986 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-container-storage-root\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.431627 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/c6289b31-17e1-4470-b65b-20f1454c9faf-builder-dockercfg-ns4k2-pull\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.440504 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/c6289b31-17e1-4470-b65b-20f1454c9faf-builder-dockercfg-ns4k2-push\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.440595 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfxhk\" (UniqueName: \"kubernetes.io/projected/c6289b31-17e1-4470-b65b-20f1454c9faf-kube-api-access-tfxhk\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.460272 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.739047 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_182d8f8c-6787-460f-8886-13e082da325a/docker-build/0.log"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.739703 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build"
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.826416 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/182d8f8c-6787-460f-8886-13e082da325a-builder-dockercfg-ns4k2-pull\") pod \"182d8f8c-6787-460f-8886-13e082da325a\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") "
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.826469 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-system-configs\") pod \"182d8f8c-6787-460f-8886-13e082da325a\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") "
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.826507 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8wb5\" (UniqueName: \"kubernetes.io/projected/182d8f8c-6787-460f-8886-13e082da325a-kube-api-access-j8wb5\") pod \"182d8f8c-6787-460f-8886-13e082da325a\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") "
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.826529 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/182d8f8c-6787-460f-8886-13e082da325a-buildcachedir\") pod \"182d8f8c-6787-460f-8886-13e082da325a\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") "
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.826591 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/182d8f8c-6787-460f-8886-13e082da325a-builder-dockercfg-ns4k2-push\") pod \"182d8f8c-6787-460f-8886-13e082da325a\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") "
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.826610 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/182d8f8c-6787-460f-8886-13e082da325a-node-pullsecrets\") pod \"182d8f8c-6787-460f-8886-13e082da325a\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") "
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.826728 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-proxy-ca-bundles\") pod \"182d8f8c-6787-460f-8886-13e082da325a\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") "
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.826763 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-build-blob-cache\") pod \"182d8f8c-6787-460f-8886-13e082da325a\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") "
Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.826786 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName:
\"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-buildworkdir\") pod \"182d8f8c-6787-460f-8886-13e082da325a\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.826811 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-container-storage-run\") pod \"182d8f8c-6787-460f-8886-13e082da325a\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.826783 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/182d8f8c-6787-460f-8886-13e082da325a-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "182d8f8c-6787-460f-8886-13e082da325a" (UID: "182d8f8c-6787-460f-8886-13e082da325a"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.826839 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-ca-bundles\") pod \"182d8f8c-6787-460f-8886-13e082da325a\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.826990 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-container-storage-root\") pod \"182d8f8c-6787-460f-8886-13e082da325a\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.826719 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/182d8f8c-6787-460f-8886-13e082da325a-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod 
"182d8f8c-6787-460f-8886-13e082da325a" (UID: "182d8f8c-6787-460f-8886-13e082da325a"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.827625 4725 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/182d8f8c-6787-460f-8886-13e082da325a-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.827648 4725 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/182d8f8c-6787-460f-8886-13e082da325a-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.827855 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "182d8f8c-6787-460f-8886-13e082da325a" (UID: "182d8f8c-6787-460f-8886-13e082da325a"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.828118 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "182d8f8c-6787-460f-8886-13e082da325a" (UID: "182d8f8c-6787-460f-8886-13e082da325a"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.828157 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "182d8f8c-6787-460f-8886-13e082da325a" (UID: "182d8f8c-6787-460f-8886-13e082da325a"). 
InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.828880 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "182d8f8c-6787-460f-8886-13e082da325a" (UID: "182d8f8c-6787-460f-8886-13e082da325a"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.828928 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "182d8f8c-6787-460f-8886-13e082da325a" (UID: "182d8f8c-6787-460f-8886-13e082da325a"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.831205 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/182d8f8c-6787-460f-8886-13e082da325a-kube-api-access-j8wb5" (OuterVolumeSpecName: "kube-api-access-j8wb5") pod "182d8f8c-6787-460f-8886-13e082da325a" (UID: "182d8f8c-6787-460f-8886-13e082da325a"). InnerVolumeSpecName "kube-api-access-j8wb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.831208 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/182d8f8c-6787-460f-8886-13e082da325a-builder-dockercfg-ns4k2-push" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-push") pod "182d8f8c-6787-460f-8886-13e082da325a" (UID: "182d8f8c-6787-460f-8886-13e082da325a"). InnerVolumeSpecName "builder-dockercfg-ns4k2-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.832013 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/182d8f8c-6787-460f-8886-13e082da325a-builder-dockercfg-ns4k2-pull" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-pull") pod "182d8f8c-6787-460f-8886-13e082da325a" (UID: "182d8f8c-6787-460f-8886-13e082da325a"). InnerVolumeSpecName "builder-dockercfg-ns4k2-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.922890 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "182d8f8c-6787-460f-8886-13e082da325a" (UID: "182d8f8c-6787-460f-8886-13e082da325a"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.929325 4725 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.929356 4725 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.929370 4725 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.929380 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.929389 4725 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.929400 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/182d8f8c-6787-460f-8886-13e082da325a-builder-dockercfg-ns4k2-pull\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.929409 4725 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.929418 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8wb5\" (UniqueName: \"kubernetes.io/projected/182d8f8c-6787-460f-8886-13e082da325a-kube-api-access-j8wb5\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.929426 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/182d8f8c-6787-460f-8886-13e082da325a-builder-dockercfg-ns4k2-push\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.966645 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "182d8f8c-6787-460f-8886-13e082da325a" (UID: "182d8f8c-6787-460f-8886-13e082da325a"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.972891 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 20 11:24:19 crc kubenswrapper[4725]: I0120 11:24:19.031708 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:19 crc kubenswrapper[4725]: I0120 11:24:19.181497 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_182d8f8c-6787-460f-8886-13e082da325a/docker-build/0.log" Jan 20 11:24:19 crc kubenswrapper[4725]: I0120 11:24:19.182129 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"182d8f8c-6787-460f-8886-13e082da325a","Type":"ContainerDied","Data":"702a14aac73a2067eb1d2ba924037c10061638d34d12490a8dd8993d2df2b036"} Jan 20 11:24:19 crc kubenswrapper[4725]: I0120 11:24:19.182176 4725 scope.go:117] "RemoveContainer" containerID="6910602968e4bebfe7fdf222aea020a439b1e739664b65c1cb39d5ad08c283ee" Jan 20 11:24:19 crc kubenswrapper[4725]: I0120 11:24:19.182306 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 20 11:24:19 crc kubenswrapper[4725]: I0120 11:24:19.186333 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"c6289b31-17e1-4470-b65b-20f1454c9faf","Type":"ContainerStarted","Data":"bd507f0738d2c0694eccbbc95fb5272e4409e25b64f458f756e9a1b54394396a"} Jan 20 11:24:19 crc kubenswrapper[4725]: I0120 11:24:19.239301 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 20 11:24:19 crc kubenswrapper[4725]: I0120 11:24:19.243679 4725 scope.go:117] "RemoveContainer" containerID="c4f2e6c9a2af8b906bd1ba4f2529ffa261f97bfacfd90048175544cbe8a4306b" Jan 20 11:24:19 crc kubenswrapper[4725]: I0120 11:24:19.245059 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 20 11:24:20 crc kubenswrapper[4725]: I0120 11:24:20.194041 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"c6289b31-17e1-4470-b65b-20f1454c9faf","Type":"ContainerStarted","Data":"d3ff8338b376ac72548be35879a4a833227c1231f0fa7c77e46446ef53b15d94"} Jan 20 11:24:20 crc kubenswrapper[4725]: I0120 11:24:20.944956 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="182d8f8c-6787-460f-8886-13e082da325a" path="/var/lib/kubelet/pods/182d8f8c-6787-460f-8886-13e082da325a/volumes" Jan 20 11:24:21 crc kubenswrapper[4725]: I0120 11:24:21.205421 4725 generic.go:334] "Generic (PLEG): container finished" podID="c6289b31-17e1-4470-b65b-20f1454c9faf" containerID="d3ff8338b376ac72548be35879a4a833227c1231f0fa7c77e46446ef53b15d94" exitCode=0 Jan 20 11:24:21 crc kubenswrapper[4725]: I0120 11:24:21.205469 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"c6289b31-17e1-4470-b65b-20f1454c9faf","Type":"ContainerDied","Data":"d3ff8338b376ac72548be35879a4a833227c1231f0fa7c77e46446ef53b15d94"} Jan 
20 11:24:22 crc kubenswrapper[4725]: I0120 11:24:22.215941 4725 generic.go:334] "Generic (PLEG): container finished" podID="c6289b31-17e1-4470-b65b-20f1454c9faf" containerID="032caed499b46e9aa411fe435c34a0b25328813786d4b4a1fa4195b3137ed331" exitCode=0 Jan 20 11:24:22 crc kubenswrapper[4725]: I0120 11:24:22.216386 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"c6289b31-17e1-4470-b65b-20f1454c9faf","Type":"ContainerDied","Data":"032caed499b46e9aa411fe435c34a0b25328813786d4b4a1fa4195b3137ed331"} Jan 20 11:24:22 crc kubenswrapper[4725]: I0120 11:24:22.257493 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_c6289b31-17e1-4470-b65b-20f1454c9faf/manage-dockerfile/0.log" Jan 20 11:24:23 crc kubenswrapper[4725]: I0120 11:24:23.233260 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"c6289b31-17e1-4470-b65b-20f1454c9faf","Type":"ContainerStarted","Data":"fc80fd4244af16703439fe94645efe3c29505a7b5b8bb53579030c06197a023e"} Jan 20 11:24:23 crc kubenswrapper[4725]: I0120 11:24:23.272552 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-2-build" podStartSLOduration=5.272509208 podStartE2EDuration="5.272509208s" podCreationTimestamp="2026-01-20 11:24:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:24:23.265165257 +0000 UTC m=+1191.473487250" watchObservedRunningTime="2026-01-20 11:24:23.272509208 +0000 UTC m=+1191.480831181" Jan 20 11:24:26 crc kubenswrapper[4725]: I0120 11:24:26.727935 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 20 11:24:26 crc kubenswrapper[4725]: I0120 11:24:26.728438 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:24:56 crc kubenswrapper[4725]: I0120 11:24:56.727779 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:24:56 crc kubenswrapper[4725]: I0120 11:24:56.728801 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:25:26 crc kubenswrapper[4725]: I0120 11:25:26.728296 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:25:26 crc kubenswrapper[4725]: I0120 11:25:26.729194 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:25:26 crc kubenswrapper[4725]: I0120 11:25:26.729271 4725 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:25:26 crc kubenswrapper[4725]: I0120 11:25:26.730223 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aa33ee4c62e22f867acace91c0f155de88e0f9773671659eb6aa460399ed540e"} pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 11:25:26 crc kubenswrapper[4725]: I0120 11:25:26.730289 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" containerID="cri-o://aa33ee4c62e22f867acace91c0f155de88e0f9773671659eb6aa460399ed540e" gracePeriod=600 Jan 20 11:25:27 crc kubenswrapper[4725]: I0120 11:25:27.439497 4725 generic.go:334] "Generic (PLEG): container finished" podID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerID="aa33ee4c62e22f867acace91c0f155de88e0f9773671659eb6aa460399ed540e" exitCode=0 Jan 20 11:25:27 crc kubenswrapper[4725]: I0120 11:25:27.439713 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerDied","Data":"aa33ee4c62e22f867acace91c0f155de88e0f9773671659eb6aa460399ed540e"} Jan 20 11:25:27 crc kubenswrapper[4725]: I0120 11:25:27.440058 4725 scope.go:117] "RemoveContainer" containerID="617ea79dfea330e669f8b0c629d26e31c927c0e6f932d5db78fa3a4c4d666946" Jan 20 11:25:28 crc kubenswrapper[4725]: I0120 11:25:28.450772 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" 
event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"f4fc1fcf338ebfe5af6d4787c5c22eee2304f68710ca0a31b34dc23628c963f3"} Jan 20 11:27:42 crc kubenswrapper[4725]: I0120 11:27:42.061506 4725 generic.go:334] "Generic (PLEG): container finished" podID="c6289b31-17e1-4470-b65b-20f1454c9faf" containerID="fc80fd4244af16703439fe94645efe3c29505a7b5b8bb53579030c06197a023e" exitCode=0 Jan 20 11:27:42 crc kubenswrapper[4725]: I0120 11:27:42.061524 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"c6289b31-17e1-4470-b65b-20f1454c9faf","Type":"ContainerDied","Data":"fc80fd4244af16703439fe94645efe3c29505a7b5b8bb53579030c06197a023e"} Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.323732 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.399908 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/c6289b31-17e1-4470-b65b-20f1454c9faf-builder-dockercfg-ns4k2-pull\") pod \"c6289b31-17e1-4470-b65b-20f1454c9faf\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.400039 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-container-storage-root\") pod \"c6289b31-17e1-4470-b65b-20f1454c9faf\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.400096 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-system-configs\") pod \"c6289b31-17e1-4470-b65b-20f1454c9faf\" (UID: 
\"c6289b31-17e1-4470-b65b-20f1454c9faf\") " Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.400121 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c6289b31-17e1-4470-b65b-20f1454c9faf-buildcachedir\") pod \"c6289b31-17e1-4470-b65b-20f1454c9faf\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.400156 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c6289b31-17e1-4470-b65b-20f1454c9faf-node-pullsecrets\") pod \"c6289b31-17e1-4470-b65b-20f1454c9faf\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.400179 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/c6289b31-17e1-4470-b65b-20f1454c9faf-builder-dockercfg-ns4k2-push\") pod \"c6289b31-17e1-4470-b65b-20f1454c9faf\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.400208 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-ca-bundles\") pod \"c6289b31-17e1-4470-b65b-20f1454c9faf\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.400254 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-proxy-ca-bundles\") pod \"c6289b31-17e1-4470-b65b-20f1454c9faf\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.400294 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-container-storage-run\") pod \"c6289b31-17e1-4470-b65b-20f1454c9faf\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.400298 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6289b31-17e1-4470-b65b-20f1454c9faf-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "c6289b31-17e1-4470-b65b-20f1454c9faf" (UID: "c6289b31-17e1-4470-b65b-20f1454c9faf"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.400344 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-build-blob-cache\") pod \"c6289b31-17e1-4470-b65b-20f1454c9faf\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.400417 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfxhk\" (UniqueName: \"kubernetes.io/projected/c6289b31-17e1-4470-b65b-20f1454c9faf-kube-api-access-tfxhk\") pod \"c6289b31-17e1-4470-b65b-20f1454c9faf\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.400488 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-buildworkdir\") pod \"c6289b31-17e1-4470-b65b-20f1454c9faf\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.401445 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-proxy-ca-bundles" (OuterVolumeSpecName: 
"build-proxy-ca-bundles") pod "c6289b31-17e1-4470-b65b-20f1454c9faf" (UID: "c6289b31-17e1-4470-b65b-20f1454c9faf"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.401499 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6289b31-17e1-4470-b65b-20f1454c9faf-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "c6289b31-17e1-4470-b65b-20f1454c9faf" (UID: "c6289b31-17e1-4470-b65b-20f1454c9faf"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.401789 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "c6289b31-17e1-4470-b65b-20f1454c9faf" (UID: "c6289b31-17e1-4470-b65b-20f1454c9faf"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.400895 4725 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c6289b31-17e1-4470-b65b-20f1454c9faf-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.402230 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "c6289b31-17e1-4470-b65b-20f1454c9faf" (UID: "c6289b31-17e1-4470-b65b-20f1454c9faf"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.402305 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "c6289b31-17e1-4470-b65b-20f1454c9faf" (UID: "c6289b31-17e1-4470-b65b-20f1454c9faf"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.408274 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6289b31-17e1-4470-b65b-20f1454c9faf-builder-dockercfg-ns4k2-push" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-push") pod "c6289b31-17e1-4470-b65b-20f1454c9faf" (UID: "c6289b31-17e1-4470-b65b-20f1454c9faf"). InnerVolumeSpecName "builder-dockercfg-ns4k2-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.408331 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6289b31-17e1-4470-b65b-20f1454c9faf-builder-dockercfg-ns4k2-pull" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-pull") pod "c6289b31-17e1-4470-b65b-20f1454c9faf" (UID: "c6289b31-17e1-4470-b65b-20f1454c9faf"). InnerVolumeSpecName "builder-dockercfg-ns4k2-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.408457 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6289b31-17e1-4470-b65b-20f1454c9faf-kube-api-access-tfxhk" (OuterVolumeSpecName: "kube-api-access-tfxhk") pod "c6289b31-17e1-4470-b65b-20f1454c9faf" (UID: "c6289b31-17e1-4470-b65b-20f1454c9faf"). InnerVolumeSpecName "kube-api-access-tfxhk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.413430 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "c6289b31-17e1-4470-b65b-20f1454c9faf" (UID: "c6289b31-17e1-4470-b65b-20f1454c9faf"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.504272 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfxhk\" (UniqueName: \"kubernetes.io/projected/c6289b31-17e1-4470-b65b-20f1454c9faf-kube-api-access-tfxhk\") on node \"crc\" DevicePath \"\"" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.504691 4725 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.504773 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/c6289b31-17e1-4470-b65b-20f1454c9faf-builder-dockercfg-ns4k2-pull\") on node \"crc\" DevicePath \"\"" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.504839 4725 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.504910 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/c6289b31-17e1-4470-b65b-20f1454c9faf-builder-dockercfg-ns4k2-push\") on node \"crc\" DevicePath \"\"" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.504995 4725 reconciler_common.go:293] 
"Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c6289b31-17e1-4470-b65b-20f1454c9faf-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.505064 4725 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.505148 4725 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.505219 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.957375 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "c6289b31-17e1-4470-b65b-20f1454c9faf" (UID: "c6289b31-17e1-4470-b65b-20f1454c9faf"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:27:44 crc kubenswrapper[4725]: I0120 11:27:44.014198 4725 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 20 11:27:44 crc kubenswrapper[4725]: I0120 11:27:44.084331 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"c6289b31-17e1-4470-b65b-20f1454c9faf","Type":"ContainerDied","Data":"bd507f0738d2c0694eccbbc95fb5272e4409e25b64f458f756e9a1b54394396a"} Jan 20 11:27:44 crc kubenswrapper[4725]: I0120 11:27:44.084833 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd507f0738d2c0694eccbbc95fb5272e4409e25b64f458f756e9a1b54394396a" Jan 20 11:27:44 crc kubenswrapper[4725]: I0120 11:27:44.084475 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 20 11:27:46 crc kubenswrapper[4725]: I0120 11:27:46.003127 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "c6289b31-17e1-4470-b65b-20f1454c9faf" (UID: "c6289b31-17e1-4470-b65b-20f1454c9faf"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:27:46 crc kubenswrapper[4725]: I0120 11:27:46.049276 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.928919 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 20 11:27:48 crc kubenswrapper[4725]: E0120 11:27:48.930017 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="182d8f8c-6787-460f-8886-13e082da325a" containerName="manage-dockerfile" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.930041 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="182d8f8c-6787-460f-8886-13e082da325a" containerName="manage-dockerfile" Jan 20 11:27:48 crc kubenswrapper[4725]: E0120 11:27:48.930098 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="182d8f8c-6787-460f-8886-13e082da325a" containerName="docker-build" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.930106 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="182d8f8c-6787-460f-8886-13e082da325a" containerName="docker-build" Jan 20 11:27:48 crc kubenswrapper[4725]: E0120 11:27:48.930120 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6289b31-17e1-4470-b65b-20f1454c9faf" containerName="manage-dockerfile" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.930128 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6289b31-17e1-4470-b65b-20f1454c9faf" containerName="manage-dockerfile" Jan 20 11:27:48 crc kubenswrapper[4725]: E0120 11:27:48.930138 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6289b31-17e1-4470-b65b-20f1454c9faf" containerName="docker-build" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.930146 4725 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="c6289b31-17e1-4470-b65b-20f1454c9faf" containerName="docker-build" Jan 20 11:27:48 crc kubenswrapper[4725]: E0120 11:27:48.930157 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6289b31-17e1-4470-b65b-20f1454c9faf" containerName="git-clone" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.930163 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6289b31-17e1-4470-b65b-20f1454c9faf" containerName="git-clone" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.930322 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="182d8f8c-6787-460f-8886-13e082da325a" containerName="docker-build" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.930362 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6289b31-17e1-4470-b65b-20f1454c9faf" containerName="docker-build" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.931350 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.937328 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ns4k2" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.938284 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-ca" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.938293 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-global-ca" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.944698 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-sys-config" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.949166 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.996499 4725 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-builder-dockercfg-ns4k2-pull\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.996598 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.996641 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-builder-dockercfg-ns4k2-push\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.996829 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.997006 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:48 crc 
kubenswrapper[4725]: I0120 11:27:48.997114 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.997261 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.997307 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.997333 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpct7\" (UniqueName: \"kubernetes.io/projected/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-kube-api-access-xpct7\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.997358 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 
11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.997399 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.997492 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.099217 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.100334 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-builder-dockercfg-ns4k2-push\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.100500 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: 
I0120 11:27:49.100631 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.100738 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.100844 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.101281 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.101374 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.099995 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.101457 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.101535 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.101418 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.101618 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.101659 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpct7\" (UniqueName: 
\"kubernetes.io/projected/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-kube-api-access-xpct7\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.101698 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.101776 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.101844 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.101932 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-builder-dockercfg-ns4k2-pull\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.102192 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.102682 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.103465 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.109153 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-builder-dockercfg-ns4k2-pull\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.109224 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-builder-dockercfg-ns4k2-push\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.123547 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpct7\" (UniqueName: 
\"kubernetes.io/projected/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-kube-api-access-xpct7\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.257211 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.530818 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 20 11:27:50 crc kubenswrapper[4725]: I0120 11:27:50.130479 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4","Type":"ContainerStarted","Data":"79cfb45a8c90dfaa65e6bb289f91b498d6f80d05aa29a1f6d45fa2050d0f30eb"} Jan 20 11:27:50 crc kubenswrapper[4725]: I0120 11:27:50.130993 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4","Type":"ContainerStarted","Data":"ef75771651c7edad9549b94a38308f8a219d2601293a77dd16261018ecc03c5a"} Jan 20 11:27:51 crc kubenswrapper[4725]: I0120 11:27:51.142221 4725 generic.go:334] "Generic (PLEG): container finished" podID="bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" containerID="79cfb45a8c90dfaa65e6bb289f91b498d6f80d05aa29a1f6d45fa2050d0f30eb" exitCode=0 Jan 20 11:27:51 crc kubenswrapper[4725]: I0120 11:27:51.142334 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4","Type":"ContainerDied","Data":"79cfb45a8c90dfaa65e6bb289f91b498d6f80d05aa29a1f6d45fa2050d0f30eb"} Jan 20 11:27:52 crc kubenswrapper[4725]: I0120 11:27:52.154129 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" 
event={"ID":"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4","Type":"ContainerStarted","Data":"c7d0859d4065010f243a29a233ddba2921cdc8ab64a769f55e2ecc4ca1c5a41a"} Jan 20 11:27:52 crc kubenswrapper[4725]: I0120 11:27:52.185116 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-1-build" podStartSLOduration=4.185068231 podStartE2EDuration="4.185068231s" podCreationTimestamp="2026-01-20 11:27:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:27:52.178064151 +0000 UTC m=+1400.386386144" watchObservedRunningTime="2026-01-20 11:27:52.185068231 +0000 UTC m=+1400.393390204" Jan 20 11:27:56 crc kubenswrapper[4725]: I0120 11:27:56.728293 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:27:56 crc kubenswrapper[4725]: I0120 11:27:56.729203 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:27:59 crc kubenswrapper[4725]: I0120 11:27:59.210277 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4/docker-build/0.log" Jan 20 11:27:59 crc kubenswrapper[4725]: I0120 11:27:59.211407 4725 generic.go:334] "Generic (PLEG): container finished" podID="bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" containerID="c7d0859d4065010f243a29a233ddba2921cdc8ab64a769f55e2ecc4ca1c5a41a" exitCode=1 Jan 20 11:27:59 crc kubenswrapper[4725]: I0120 
11:27:59.211475 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4","Type":"ContainerDied","Data":"c7d0859d4065010f243a29a233ddba2921cdc8ab64a769f55e2ecc4ca1c5a41a"} Jan 20 11:27:59 crc kubenswrapper[4725]: I0120 11:27:59.258253 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.474122 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4/docker-build/0.log" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.474972 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.479891 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-buildworkdir\") pod \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.479927 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-buildcachedir\") pod \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.479957 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-builder-dockercfg-ns4k2-pull\") pod \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.479995 4725 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-system-configs\") pod \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.480047 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-proxy-ca-bundles\") pod \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.480067 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-node-pullsecrets\") pod \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.480119 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-container-storage-run\") pod \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.480143 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" (UID: "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.480177 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-container-storage-root\") pod \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.480331 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-ca-bundles\") pod \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.480334 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" (UID: "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.480381 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpct7\" (UniqueName: \"kubernetes.io/projected/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-kube-api-access-xpct7\") pod \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.480436 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-blob-cache\") pod \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.480486 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-builder-dockercfg-ns4k2-push\") pod \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.481305 4725 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.481357 4725 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.481298 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod 
"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" (UID: "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.481318 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" (UID: "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.481462 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" (UID: "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.481627 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" (UID: "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.482577 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" (UID: "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.488204 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-builder-dockercfg-ns4k2-push" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-push") pod "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" (UID: "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4"). InnerVolumeSpecName "builder-dockercfg-ns4k2-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.488234 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-builder-dockercfg-ns4k2-pull" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-pull") pod "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" (UID: "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4"). InnerVolumeSpecName "builder-dockercfg-ns4k2-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.488269 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-kube-api-access-xpct7" (OuterVolumeSpecName: "kube-api-access-xpct7") pod "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" (UID: "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4"). InnerVolumeSpecName "kube-api-access-xpct7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.571025 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" (UID: "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.581918 4725 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.582360 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-builder-dockercfg-ns4k2-pull\") on node \"crc\" DevicePath \"\"" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.582439 4725 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.582513 4725 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.582626 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.582709 4725 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.582774 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpct7\" (UniqueName: \"kubernetes.io/projected/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-kube-api-access-xpct7\") on node \"crc\" 
DevicePath \"\"" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.583168 4725 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.583230 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-builder-dockercfg-ns4k2-push\") on node \"crc\" DevicePath \"\"" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.894250 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" (UID: "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.914146 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 20 11:28:00 crc kubenswrapper[4725]: E0120 11:28:00.915654 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" containerName="manage-dockerfile" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.915703 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" containerName="manage-dockerfile" Jan 20 11:28:00 crc kubenswrapper[4725]: E0120 11:28:00.915715 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" containerName="docker-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.915724 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" containerName="docker-build" 
Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.915902 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" containerName="docker-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.917246 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.922104 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-sys-config" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.923344 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-ca" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.926734 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-global-ca" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.928476 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.988236 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-builder-dockercfg-ns4k2-pull\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.988298 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwbcz\" (UniqueName: \"kubernetes.io/projected/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-kube-api-access-dwbcz\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.988336 4725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.988421 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.988467 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-builder-dockercfg-ns4k2-push\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.988490 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.988525 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.988546 4725 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.988574 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.988596 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.988668 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.988692 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 
11:28:00.988740 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.089256 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-builder-dockercfg-ns4k2-push\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.089328 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.089358 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.090360 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.090443 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.090472 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.090495 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.090505 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.090532 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.090678 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-node-pullsecrets\") 
pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.090784 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.090861 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-builder-dockercfg-ns4k2-pull\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.090923 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwbcz\" (UniqueName: \"kubernetes.io/projected/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-kube-api-access-dwbcz\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.090989 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.091040 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " 
pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.091199 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.091337 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.091420 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.091597 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.091928 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.092584 4725 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.093703 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-builder-dockercfg-ns4k2-push\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.094564 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-builder-dockercfg-ns4k2-pull\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.109811 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwbcz\" (UniqueName: \"kubernetes.io/projected/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-kube-api-access-dwbcz\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.229675 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4/docker-build/0.log" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.230042 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4","Type":"ContainerDied","Data":"ef75771651c7edad9549b94a38308f8a219d2601293a77dd16261018ecc03c5a"} Jan 20 11:28:01 crc 
kubenswrapper[4725]: I0120 11:28:01.230108 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef75771651c7edad9549b94a38308f8a219d2601293a77dd16261018ecc03c5a" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.230179 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.236331 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.257049 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.263422 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.494876 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 20 11:28:02 crc kubenswrapper[4725]: I0120 11:28:02.266464 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286","Type":"ContainerStarted","Data":"a4903596b33031d7aed7600a9e2bb86e46e90e8822bbe874f78076489c05a258"} Jan 20 11:28:02 crc kubenswrapper[4725]: I0120 11:28:02.268300 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286","Type":"ContainerStarted","Data":"6780fdbbe3f7a45599b0514328dfab3ade3905ca8a25ac03e4edfbe11fcd11a8"} Jan 20 11:28:02 crc kubenswrapper[4725]: I0120 11:28:02.941356 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" path="/var/lib/kubelet/pods/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4/volumes" Jan 20 11:28:03 crc kubenswrapper[4725]: I0120 11:28:03.280483 4725 
generic.go:334] "Generic (PLEG): container finished" podID="5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" containerID="a4903596b33031d7aed7600a9e2bb86e46e90e8822bbe874f78076489c05a258" exitCode=0 Jan 20 11:28:03 crc kubenswrapper[4725]: I0120 11:28:03.280660 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286","Type":"ContainerDied","Data":"a4903596b33031d7aed7600a9e2bb86e46e90e8822bbe874f78076489c05a258"} Jan 20 11:28:04 crc kubenswrapper[4725]: I0120 11:28:04.292441 4725 generic.go:334] "Generic (PLEG): container finished" podID="5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" containerID="aebcdd2389cf5555810a810b8ba5ed5db46fceb8094ee87e91d2217e630e31e3" exitCode=0 Jan 20 11:28:04 crc kubenswrapper[4725]: I0120 11:28:04.293025 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286","Type":"ContainerDied","Data":"aebcdd2389cf5555810a810b8ba5ed5db46fceb8094ee87e91d2217e630e31e3"} Jan 20 11:28:04 crc kubenswrapper[4725]: I0120 11:28:04.341547 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286/manage-dockerfile/0.log" Jan 20 11:28:05 crc kubenswrapper[4725]: I0120 11:28:05.305934 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286","Type":"ContainerStarted","Data":"5e42726132cce6cccfbcebe76e994c0bbf095e27ce3388781ab16bb72f1fbb76"} Jan 20 11:28:05 crc kubenswrapper[4725]: I0120 11:28:05.336222 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-2-build" podStartSLOduration=5.336191473 podStartE2EDuration="5.336191473s" podCreationTimestamp="2026-01-20 11:28:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-01-20 11:28:05.330650319 +0000 UTC m=+1413.538972302" watchObservedRunningTime="2026-01-20 11:28:05.336191473 +0000 UTC m=+1413.544513446" Jan 20 11:28:18 crc kubenswrapper[4725]: I0120 11:28:18.760645 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-62jw6"] Jan 20 11:28:18 crc kubenswrapper[4725]: I0120 11:28:18.762973 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-62jw6" Jan 20 11:28:18 crc kubenswrapper[4725]: I0120 11:28:18.789699 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-62jw6"] Jan 20 11:28:18 crc kubenswrapper[4725]: I0120 11:28:18.908525 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlsjr\" (UniqueName: \"kubernetes.io/projected/cef150c1-b17c-4f6f-8103-016969a51c8d-kube-api-access-wlsjr\") pod \"redhat-operators-62jw6\" (UID: \"cef150c1-b17c-4f6f-8103-016969a51c8d\") " pod="openshift-marketplace/redhat-operators-62jw6" Jan 20 11:28:18 crc kubenswrapper[4725]: I0120 11:28:18.908604 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cef150c1-b17c-4f6f-8103-016969a51c8d-catalog-content\") pod \"redhat-operators-62jw6\" (UID: \"cef150c1-b17c-4f6f-8103-016969a51c8d\") " pod="openshift-marketplace/redhat-operators-62jw6" Jan 20 11:28:18 crc kubenswrapper[4725]: I0120 11:28:18.908927 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cef150c1-b17c-4f6f-8103-016969a51c8d-utilities\") pod \"redhat-operators-62jw6\" (UID: \"cef150c1-b17c-4f6f-8103-016969a51c8d\") " pod="openshift-marketplace/redhat-operators-62jw6" Jan 20 11:28:19 crc kubenswrapper[4725]: I0120 11:28:19.010480 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cef150c1-b17c-4f6f-8103-016969a51c8d-utilities\") pod \"redhat-operators-62jw6\" (UID: \"cef150c1-b17c-4f6f-8103-016969a51c8d\") " pod="openshift-marketplace/redhat-operators-62jw6"
Jan 20 11:28:19 crc kubenswrapper[4725]: I0120 11:28:19.010640 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlsjr\" (UniqueName: \"kubernetes.io/projected/cef150c1-b17c-4f6f-8103-016969a51c8d-kube-api-access-wlsjr\") pod \"redhat-operators-62jw6\" (UID: \"cef150c1-b17c-4f6f-8103-016969a51c8d\") " pod="openshift-marketplace/redhat-operators-62jw6"
Jan 20 11:28:19 crc kubenswrapper[4725]: I0120 11:28:19.010679 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cef150c1-b17c-4f6f-8103-016969a51c8d-catalog-content\") pod \"redhat-operators-62jw6\" (UID: \"cef150c1-b17c-4f6f-8103-016969a51c8d\") " pod="openshift-marketplace/redhat-operators-62jw6"
Jan 20 11:28:19 crc kubenswrapper[4725]: I0120 11:28:19.011229 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cef150c1-b17c-4f6f-8103-016969a51c8d-catalog-content\") pod \"redhat-operators-62jw6\" (UID: \"cef150c1-b17c-4f6f-8103-016969a51c8d\") " pod="openshift-marketplace/redhat-operators-62jw6"
Jan 20 11:28:19 crc kubenswrapper[4725]: I0120 11:28:19.011628 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cef150c1-b17c-4f6f-8103-016969a51c8d-utilities\") pod \"redhat-operators-62jw6\" (UID: \"cef150c1-b17c-4f6f-8103-016969a51c8d\") " pod="openshift-marketplace/redhat-operators-62jw6"
Jan 20 11:28:19 crc kubenswrapper[4725]: I0120 11:28:19.033291 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlsjr\" (UniqueName: \"kubernetes.io/projected/cef150c1-b17c-4f6f-8103-016969a51c8d-kube-api-access-wlsjr\") pod \"redhat-operators-62jw6\" (UID: \"cef150c1-b17c-4f6f-8103-016969a51c8d\") " pod="openshift-marketplace/redhat-operators-62jw6"
Jan 20 11:28:19 crc kubenswrapper[4725]: I0120 11:28:19.087681 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-62jw6"
Jan 20 11:28:19 crc kubenswrapper[4725]: I0120 11:28:19.591482 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-62jw6"]
Jan 20 11:28:19 crc kubenswrapper[4725]: W0120 11:28:19.600419 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcef150c1_b17c_4f6f_8103_016969a51c8d.slice/crio-6cbff9fb4de70c7a85d041406c13f9cc25777a12d6d7bd5fbb1f26642bdd57ca WatchSource:0}: Error finding container 6cbff9fb4de70c7a85d041406c13f9cc25777a12d6d7bd5fbb1f26642bdd57ca: Status 404 returned error can't find the container with id 6cbff9fb4de70c7a85d041406c13f9cc25777a12d6d7bd5fbb1f26642bdd57ca
Jan 20 11:28:20 crc kubenswrapper[4725]: I0120 11:28:20.411563 4725 generic.go:334] "Generic (PLEG): container finished" podID="cef150c1-b17c-4f6f-8103-016969a51c8d" containerID="d746042eaeefcf33c76db8ffb0fa7d4dee76a283d927872450bda3af655c4005" exitCode=0
Jan 20 11:28:20 crc kubenswrapper[4725]: I0120 11:28:20.411660 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62jw6" event={"ID":"cef150c1-b17c-4f6f-8103-016969a51c8d","Type":"ContainerDied","Data":"d746042eaeefcf33c76db8ffb0fa7d4dee76a283d927872450bda3af655c4005"}
Jan 20 11:28:20 crc kubenswrapper[4725]: I0120 11:28:20.411997 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62jw6" event={"ID":"cef150c1-b17c-4f6f-8103-016969a51c8d","Type":"ContainerStarted","Data":"6cbff9fb4de70c7a85d041406c13f9cc25777a12d6d7bd5fbb1f26642bdd57ca"}
Jan 20 11:28:20 crc kubenswrapper[4725]: I0120 11:28:20.414239 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 20 11:28:22 crc kubenswrapper[4725]: I0120 11:28:22.430572 4725 generic.go:334] "Generic (PLEG): container finished" podID="cef150c1-b17c-4f6f-8103-016969a51c8d" containerID="afd8b9b55b9ef5ee8fdcc31d664fcbaf0a0b3c5d196ce6a09aa4afa6fc6e6b47" exitCode=0
Jan 20 11:28:22 crc kubenswrapper[4725]: I0120 11:28:22.430624 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62jw6" event={"ID":"cef150c1-b17c-4f6f-8103-016969a51c8d","Type":"ContainerDied","Data":"afd8b9b55b9ef5ee8fdcc31d664fcbaf0a0b3c5d196ce6a09aa4afa6fc6e6b47"}
Jan 20 11:28:24 crc kubenswrapper[4725]: I0120 11:28:24.450337 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62jw6" event={"ID":"cef150c1-b17c-4f6f-8103-016969a51c8d","Type":"ContainerStarted","Data":"c8f1b6331ce812853ba8a0c67beb1a1b27dc4616738cbd2627af52165b530e0b"}
Jan 20 11:28:24 crc kubenswrapper[4725]: I0120 11:28:24.509198 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-62jw6" podStartSLOduration=3.6270652439999997 podStartE2EDuration="6.50916597s" podCreationTimestamp="2026-01-20 11:28:18 +0000 UTC" firstStartedPulling="2026-01-20 11:28:20.413769366 +0000 UTC m=+1428.622091339" lastFinishedPulling="2026-01-20 11:28:23.295870072 +0000 UTC m=+1431.504192065" observedRunningTime="2026-01-20 11:28:24.505425022 +0000 UTC m=+1432.713747005" watchObservedRunningTime="2026-01-20 11:28:24.50916597 +0000 UTC m=+1432.717487943"
Jan 20 11:28:26 crc kubenswrapper[4725]: I0120 11:28:26.728209 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 11:28:26 crc kubenswrapper[4725]: I0120 11:28:26.728861 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 11:28:29 crc kubenswrapper[4725]: I0120 11:28:29.088133 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-62jw6"
Jan 20 11:28:29 crc kubenswrapper[4725]: I0120 11:28:29.088228 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-62jw6"
Jan 20 11:28:30 crc kubenswrapper[4725]: I0120 11:28:30.143598 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-62jw6" podUID="cef150c1-b17c-4f6f-8103-016969a51c8d" containerName="registry-server" probeResult="failure" output=<
Jan 20 11:28:30 crc kubenswrapper[4725]: timeout: failed to connect service ":50051" within 1s
Jan 20 11:28:30 crc kubenswrapper[4725]: >
Jan 20 11:28:39 crc kubenswrapper[4725]: I0120 11:28:39.132821 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-62jw6"
Jan 20 11:28:39 crc kubenswrapper[4725]: I0120 11:28:39.178410 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-62jw6"
Jan 20 11:28:39 crc kubenswrapper[4725]: I0120 11:28:39.378311 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-62jw6"]
Jan 20 11:28:40 crc kubenswrapper[4725]: I0120 11:28:40.580241 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-62jw6" podUID="cef150c1-b17c-4f6f-8103-016969a51c8d" containerName="registry-server" containerID="cri-o://c8f1b6331ce812853ba8a0c67beb1a1b27dc4616738cbd2627af52165b530e0b" gracePeriod=2
Jan 20 11:28:40 crc kubenswrapper[4725]: I0120 11:28:40.958744 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-62jw6"
Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.088424 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cef150c1-b17c-4f6f-8103-016969a51c8d-catalog-content\") pod \"cef150c1-b17c-4f6f-8103-016969a51c8d\" (UID: \"cef150c1-b17c-4f6f-8103-016969a51c8d\") "
Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.089478 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cef150c1-b17c-4f6f-8103-016969a51c8d-utilities\") pod \"cef150c1-b17c-4f6f-8103-016969a51c8d\" (UID: \"cef150c1-b17c-4f6f-8103-016969a51c8d\") "
Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.089585 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlsjr\" (UniqueName: \"kubernetes.io/projected/cef150c1-b17c-4f6f-8103-016969a51c8d-kube-api-access-wlsjr\") pod \"cef150c1-b17c-4f6f-8103-016969a51c8d\" (UID: \"cef150c1-b17c-4f6f-8103-016969a51c8d\") "
Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.091856 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cef150c1-b17c-4f6f-8103-016969a51c8d-utilities" (OuterVolumeSpecName: "utilities") pod "cef150c1-b17c-4f6f-8103-016969a51c8d" (UID: "cef150c1-b17c-4f6f-8103-016969a51c8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.108263 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cef150c1-b17c-4f6f-8103-016969a51c8d-kube-api-access-wlsjr" (OuterVolumeSpecName: "kube-api-access-wlsjr") pod "cef150c1-b17c-4f6f-8103-016969a51c8d" (UID: "cef150c1-b17c-4f6f-8103-016969a51c8d"). InnerVolumeSpecName "kube-api-access-wlsjr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.192581 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cef150c1-b17c-4f6f-8103-016969a51c8d-utilities\") on node \"crc\" DevicePath \"\""
Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.192631 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlsjr\" (UniqueName: \"kubernetes.io/projected/cef150c1-b17c-4f6f-8103-016969a51c8d-kube-api-access-wlsjr\") on node \"crc\" DevicePath \"\""
Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.237309 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cef150c1-b17c-4f6f-8103-016969a51c8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cef150c1-b17c-4f6f-8103-016969a51c8d" (UID: "cef150c1-b17c-4f6f-8103-016969a51c8d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.294693 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cef150c1-b17c-4f6f-8103-016969a51c8d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.591782 4725 generic.go:334] "Generic (PLEG): container finished" podID="cef150c1-b17c-4f6f-8103-016969a51c8d" containerID="c8f1b6331ce812853ba8a0c67beb1a1b27dc4616738cbd2627af52165b530e0b" exitCode=0
Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.591839 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62jw6" event={"ID":"cef150c1-b17c-4f6f-8103-016969a51c8d","Type":"ContainerDied","Data":"c8f1b6331ce812853ba8a0c67beb1a1b27dc4616738cbd2627af52165b530e0b"}
Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.591887 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62jw6" event={"ID":"cef150c1-b17c-4f6f-8103-016969a51c8d","Type":"ContainerDied","Data":"6cbff9fb4de70c7a85d041406c13f9cc25777a12d6d7bd5fbb1f26642bdd57ca"}
Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.591903 4725 scope.go:117] "RemoveContainer" containerID="c8f1b6331ce812853ba8a0c67beb1a1b27dc4616738cbd2627af52165b530e0b"
Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.592056 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-62jw6"
Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.614692 4725 scope.go:117] "RemoveContainer" containerID="afd8b9b55b9ef5ee8fdcc31d664fcbaf0a0b3c5d196ce6a09aa4afa6fc6e6b47"
Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.635001 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-62jw6"]
Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.640189 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-62jw6"]
Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.650738 4725 scope.go:117] "RemoveContainer" containerID="d746042eaeefcf33c76db8ffb0fa7d4dee76a283d927872450bda3af655c4005"
Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.668790 4725 scope.go:117] "RemoveContainer" containerID="c8f1b6331ce812853ba8a0c67beb1a1b27dc4616738cbd2627af52165b530e0b"
Jan 20 11:28:41 crc kubenswrapper[4725]: E0120 11:28:41.669499 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8f1b6331ce812853ba8a0c67beb1a1b27dc4616738cbd2627af52165b530e0b\": container with ID starting with c8f1b6331ce812853ba8a0c67beb1a1b27dc4616738cbd2627af52165b530e0b not found: ID does not exist" containerID="c8f1b6331ce812853ba8a0c67beb1a1b27dc4616738cbd2627af52165b530e0b"
Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.669544 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8f1b6331ce812853ba8a0c67beb1a1b27dc4616738cbd2627af52165b530e0b"} err="failed to get container status \"c8f1b6331ce812853ba8a0c67beb1a1b27dc4616738cbd2627af52165b530e0b\": rpc error: code = NotFound desc = could not find container \"c8f1b6331ce812853ba8a0c67beb1a1b27dc4616738cbd2627af52165b530e0b\": container with ID starting with c8f1b6331ce812853ba8a0c67beb1a1b27dc4616738cbd2627af52165b530e0b not found: ID does not exist"
Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.669570 4725 scope.go:117] "RemoveContainer" containerID="afd8b9b55b9ef5ee8fdcc31d664fcbaf0a0b3c5d196ce6a09aa4afa6fc6e6b47"
Jan 20 11:28:41 crc kubenswrapper[4725]: E0120 11:28:41.670156 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afd8b9b55b9ef5ee8fdcc31d664fcbaf0a0b3c5d196ce6a09aa4afa6fc6e6b47\": container with ID starting with afd8b9b55b9ef5ee8fdcc31d664fcbaf0a0b3c5d196ce6a09aa4afa6fc6e6b47 not found: ID does not exist" containerID="afd8b9b55b9ef5ee8fdcc31d664fcbaf0a0b3c5d196ce6a09aa4afa6fc6e6b47"
Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.670183 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afd8b9b55b9ef5ee8fdcc31d664fcbaf0a0b3c5d196ce6a09aa4afa6fc6e6b47"} err="failed to get container status \"afd8b9b55b9ef5ee8fdcc31d664fcbaf0a0b3c5d196ce6a09aa4afa6fc6e6b47\": rpc error: code = NotFound desc = could not find container \"afd8b9b55b9ef5ee8fdcc31d664fcbaf0a0b3c5d196ce6a09aa4afa6fc6e6b47\": container with ID starting with afd8b9b55b9ef5ee8fdcc31d664fcbaf0a0b3c5d196ce6a09aa4afa6fc6e6b47 not found: ID does not exist"
Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.670200 4725 scope.go:117] "RemoveContainer" containerID="d746042eaeefcf33c76db8ffb0fa7d4dee76a283d927872450bda3af655c4005"
Jan 20 11:28:41 crc kubenswrapper[4725]: E0120 11:28:41.670610 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d746042eaeefcf33c76db8ffb0fa7d4dee76a283d927872450bda3af655c4005\": container with ID starting with d746042eaeefcf33c76db8ffb0fa7d4dee76a283d927872450bda3af655c4005 not found: ID does not exist" containerID="d746042eaeefcf33c76db8ffb0fa7d4dee76a283d927872450bda3af655c4005"
Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.670635 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d746042eaeefcf33c76db8ffb0fa7d4dee76a283d927872450bda3af655c4005"} err="failed to get container status \"d746042eaeefcf33c76db8ffb0fa7d4dee76a283d927872450bda3af655c4005\": rpc error: code = NotFound desc = could not find container \"d746042eaeefcf33c76db8ffb0fa7d4dee76a283d927872450bda3af655c4005\": container with ID starting with d746042eaeefcf33c76db8ffb0fa7d4dee76a283d927872450bda3af655c4005 not found: ID does not exist"
Jan 20 11:28:42 crc kubenswrapper[4725]: I0120 11:28:42.949970 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cef150c1-b17c-4f6f-8103-016969a51c8d" path="/var/lib/kubelet/pods/cef150c1-b17c-4f6f-8103-016969a51c8d/volumes"
Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.536389 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vkfs6"]
Jan 20 11:28:50 crc kubenswrapper[4725]: E0120 11:28:50.537516 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cef150c1-b17c-4f6f-8103-016969a51c8d" containerName="extract-content"
Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.537538 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="cef150c1-b17c-4f6f-8103-016969a51c8d" containerName="extract-content"
Jan 20 11:28:50 crc kubenswrapper[4725]: E0120 11:28:50.537559 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cef150c1-b17c-4f6f-8103-016969a51c8d" containerName="extract-utilities"
Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.537567 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="cef150c1-b17c-4f6f-8103-016969a51c8d" containerName="extract-utilities"
Jan 20 11:28:50 crc kubenswrapper[4725]: E0120 11:28:50.537580 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cef150c1-b17c-4f6f-8103-016969a51c8d" containerName="registry-server"
Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.537588 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="cef150c1-b17c-4f6f-8103-016969a51c8d" containerName="registry-server"
Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.537822 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="cef150c1-b17c-4f6f-8103-016969a51c8d" containerName="registry-server"
Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.538952 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vkfs6"
Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.540822 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stfjh\" (UniqueName: \"kubernetes.io/projected/19e454ee-77bb-40ff-a78b-661546d1cc26-kube-api-access-stfjh\") pod \"certified-operators-vkfs6\" (UID: \"19e454ee-77bb-40ff-a78b-661546d1cc26\") " pod="openshift-marketplace/certified-operators-vkfs6"
Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.540940 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19e454ee-77bb-40ff-a78b-661546d1cc26-catalog-content\") pod \"certified-operators-vkfs6\" (UID: \"19e454ee-77bb-40ff-a78b-661546d1cc26\") " pod="openshift-marketplace/certified-operators-vkfs6"
Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.541034 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19e454ee-77bb-40ff-a78b-661546d1cc26-utilities\") pod \"certified-operators-vkfs6\" (UID: \"19e454ee-77bb-40ff-a78b-661546d1cc26\") " pod="openshift-marketplace/certified-operators-vkfs6"
Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.562120 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vkfs6"]
Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.642273 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19e454ee-77bb-40ff-a78b-661546d1cc26-utilities\") pod \"certified-operators-vkfs6\" (UID: \"19e454ee-77bb-40ff-a78b-661546d1cc26\") " pod="openshift-marketplace/certified-operators-vkfs6"
Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.642533 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stfjh\" (UniqueName: \"kubernetes.io/projected/19e454ee-77bb-40ff-a78b-661546d1cc26-kube-api-access-stfjh\") pod \"certified-operators-vkfs6\" (UID: \"19e454ee-77bb-40ff-a78b-661546d1cc26\") " pod="openshift-marketplace/certified-operators-vkfs6"
Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.642592 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19e454ee-77bb-40ff-a78b-661546d1cc26-catalog-content\") pod \"certified-operators-vkfs6\" (UID: \"19e454ee-77bb-40ff-a78b-661546d1cc26\") " pod="openshift-marketplace/certified-operators-vkfs6"
Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.642971 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19e454ee-77bb-40ff-a78b-661546d1cc26-utilities\") pod \"certified-operators-vkfs6\" (UID: \"19e454ee-77bb-40ff-a78b-661546d1cc26\") " pod="openshift-marketplace/certified-operators-vkfs6"
Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.643183 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19e454ee-77bb-40ff-a78b-661546d1cc26-catalog-content\") pod \"certified-operators-vkfs6\" (UID: \"19e454ee-77bb-40ff-a78b-661546d1cc26\") " pod="openshift-marketplace/certified-operators-vkfs6"
Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.678297 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stfjh\" (UniqueName: \"kubernetes.io/projected/19e454ee-77bb-40ff-a78b-661546d1cc26-kube-api-access-stfjh\") pod \"certified-operators-vkfs6\" (UID: \"19e454ee-77bb-40ff-a78b-661546d1cc26\") " pod="openshift-marketplace/certified-operators-vkfs6"
Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.860284 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vkfs6"
Jan 20 11:28:51 crc kubenswrapper[4725]: I0120 11:28:51.251575 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vkfs6"]
Jan 20 11:28:51 crc kubenswrapper[4725]: I0120 11:28:51.675217 4725 generic.go:334] "Generic (PLEG): container finished" podID="19e454ee-77bb-40ff-a78b-661546d1cc26" containerID="697c829883daaf99403d534b91f22a7c51fba27ae733892f60ac1cb76ed67d9e" exitCode=0
Jan 20 11:28:51 crc kubenswrapper[4725]: I0120 11:28:51.675307 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkfs6" event={"ID":"19e454ee-77bb-40ff-a78b-661546d1cc26","Type":"ContainerDied","Data":"697c829883daaf99403d534b91f22a7c51fba27ae733892f60ac1cb76ed67d9e"}
Jan 20 11:28:51 crc kubenswrapper[4725]: I0120 11:28:51.675830 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkfs6" event={"ID":"19e454ee-77bb-40ff-a78b-661546d1cc26","Type":"ContainerStarted","Data":"2d4727da80686ae11420fefc3155e2cfb58d10a64c59aa8f6a79ffbd6e6c73e2"}
Jan 20 11:28:54 crc kubenswrapper[4725]: I0120 11:28:54.702340 4725 generic.go:334] "Generic (PLEG): container finished" podID="19e454ee-77bb-40ff-a78b-661546d1cc26" containerID="9fe80860779e71290c302ab4d2ade7e5495476679097aec296b10762eb20d959" exitCode=0
Jan 20 11:28:54 crc kubenswrapper[4725]: I0120 11:28:54.703253 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkfs6" event={"ID":"19e454ee-77bb-40ff-a78b-661546d1cc26","Type":"ContainerDied","Data":"9fe80860779e71290c302ab4d2ade7e5495476679097aec296b10762eb20d959"}
Jan 20 11:28:56 crc kubenswrapper[4725]: I0120 11:28:56.728673 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 11:28:56 crc kubenswrapper[4725]: I0120 11:28:56.728787 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 11:28:56 crc kubenswrapper[4725]: I0120 11:28:56.728866 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8"
Jan 20 11:28:56 crc kubenswrapper[4725]: I0120 11:28:56.729798 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f4fc1fcf338ebfe5af6d4787c5c22eee2304f68710ca0a31b34dc23628c963f3"} pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 20 11:28:56 crc kubenswrapper[4725]: I0120 11:28:56.729861 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" containerID="cri-o://f4fc1fcf338ebfe5af6d4787c5c22eee2304f68710ca0a31b34dc23628c963f3" gracePeriod=600
Jan 20 11:28:57 crc kubenswrapper[4725]: I0120 11:28:57.727524 4725 generic.go:334] "Generic (PLEG): container finished" podID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerID="f4fc1fcf338ebfe5af6d4787c5c22eee2304f68710ca0a31b34dc23628c963f3" exitCode=0
Jan 20 11:28:57 crc kubenswrapper[4725]: I0120 11:28:57.727614 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerDied","Data":"f4fc1fcf338ebfe5af6d4787c5c22eee2304f68710ca0a31b34dc23628c963f3"}
Jan 20 11:28:57 crc kubenswrapper[4725]: I0120 11:28:57.728130 4725 scope.go:117] "RemoveContainer" containerID="aa33ee4c62e22f867acace91c0f155de88e0f9773671659eb6aa460399ed540e"
Jan 20 11:28:58 crc kubenswrapper[4725]: I0120 11:28:58.738919 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f"}
Jan 20 11:28:58 crc kubenswrapper[4725]: I0120 11:28:58.741196 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkfs6" event={"ID":"19e454ee-77bb-40ff-a78b-661546d1cc26","Type":"ContainerStarted","Data":"dd054d47ac60c0a05d4f5f487a487dc59c9d8edb55ac724e00a76e2e06771bd2"}
Jan 20 11:28:59 crc kubenswrapper[4725]: I0120 11:28:59.776554 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vkfs6" podStartSLOduration=4.129314189 podStartE2EDuration="9.776526199s" podCreationTimestamp="2026-01-20 11:28:50 +0000 UTC" firstStartedPulling="2026-01-20 11:28:52.685286767 +0000 UTC m=+1460.893608760" lastFinishedPulling="2026-01-20 11:28:58.332498797 +0000 UTC m=+1466.540820770" observedRunningTime="2026-01-20 11:28:59.771274714 +0000 UTC m=+1467.979596677" watchObservedRunningTime="2026-01-20 11:28:59.776526199 +0000 UTC m=+1467.984848172"
Jan 20 11:29:00 crc kubenswrapper[4725]: I0120 11:29:00.861268 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vkfs6"
Jan 20 11:29:00 crc kubenswrapper[4725]: I0120 11:29:00.861355 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vkfs6"
Jan 20 11:29:00 crc kubenswrapper[4725]: I0120 11:29:00.942603 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vkfs6"
Jan 20 11:29:03 crc kubenswrapper[4725]: I0120 11:29:03.797157 4725 generic.go:334] "Generic (PLEG): container finished" podID="5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" containerID="5e42726132cce6cccfbcebe76e994c0bbf095e27ce3388781ab16bb72f1fbb76" exitCode=0
Jan 20 11:29:03 crc kubenswrapper[4725]: I0120 11:29:03.797251 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286","Type":"ContainerDied","Data":"5e42726132cce6cccfbcebe76e994c0bbf095e27ce3388781ab16bb72f1fbb76"}
Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.061575 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build"
Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.200220 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-container-storage-root\") pod \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") "
Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.200302 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-buildcachedir\") pod \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") "
Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.200344 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-blob-cache\") pod \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") "
Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.200365 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-node-pullsecrets\") pod \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") "
Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.200439 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-system-configs\") pod \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") "
Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.200464 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" (UID: "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.200493 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-builder-dockercfg-ns4k2-push\") pod \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") "
Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.200553 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-builder-dockercfg-ns4k2-pull\") pod \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") "
Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.200558 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" (UID: "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.200591 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwbcz\" (UniqueName: \"kubernetes.io/projected/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-kube-api-access-dwbcz\") pod \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") "
Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.200713 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-container-storage-run\") pod \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") "
Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.200750 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-buildworkdir\") pod \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") "
Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.200786 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-ca-bundles\") pod \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") "
Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.200810 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-proxy-ca-bundles\") pod \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") "
Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.201053 4725 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-buildcachedir\") on node \"crc\" DevicePath \"\""
Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.201067 4725 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.201662 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" (UID: "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.201750 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" (UID: "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.203058 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" (UID: "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.203586 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" (UID: "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.204170 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" (UID: "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.207643 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-builder-dockercfg-ns4k2-push" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-push") pod "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" (UID: "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286"). InnerVolumeSpecName "builder-dockercfg-ns4k2-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.208258 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-kube-api-access-dwbcz" (OuterVolumeSpecName: "kube-api-access-dwbcz") pod "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" (UID: "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286"). InnerVolumeSpecName "kube-api-access-dwbcz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.209260 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-builder-dockercfg-ns4k2-pull" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-pull") pod "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" (UID: "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286"). InnerVolumeSpecName "builder-dockercfg-ns4k2-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.302267 4725 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.302748 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-builder-dockercfg-ns4k2-push\") on node \"crc\" DevicePath \"\""
Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.302831 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-builder-dockercfg-ns4k2-pull\") on node \"crc\" DevicePath \"\""
Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.302923 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwbcz\" (UniqueName: \"kubernetes.io/projected/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-kube-api-access-dwbcz\") on node \"crc\" DevicePath \"\""
Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.302989 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-container-storage-run\") on node \"crc\" DevicePath \"\""
Jan 20 11:29:05 crc
kubenswrapper[4725]: I0120 11:29:05.303108 4725 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.303177 4725 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.303251 4725 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.327966 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" (UID: "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.404963 4725 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.817034 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286","Type":"ContainerDied","Data":"6780fdbbe3f7a45599b0514328dfab3ade3905ca8a25ac03e4edfbe11fcd11a8"} Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.817215 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6780fdbbe3f7a45599b0514328dfab3ade3905ca8a25ac03e4edfbe11fcd11a8" Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.817219 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.984835 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" (UID: "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:29:06 crc kubenswrapper[4725]: I0120 11:29:06.013886 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.105626 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 20 11:29:10 crc kubenswrapper[4725]: E0120 11:29:10.106863 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" containerName="docker-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.106883 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" containerName="docker-build" Jan 20 11:29:10 crc kubenswrapper[4725]: E0120 11:29:10.106906 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" containerName="manage-dockerfile" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.106914 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" containerName="manage-dockerfile" Jan 20 11:29:10 crc kubenswrapper[4725]: E0120 11:29:10.106924 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" containerName="git-clone" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.106932 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" containerName="git-clone" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.107098 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" containerName="docker-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.108072 4725 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.110986 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-ca" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.111256 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ns4k2" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.111287 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-global-ca" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.123715 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-sys-config" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.125590 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.283362 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7hww\" (UniqueName: \"kubernetes.io/projected/0db9d434-26af-4738-bb93-05cd9b720c87-kube-api-access-k7hww\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.283445 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.283484 4725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/0db9d434-26af-4738-bb93-05cd9b720c87-builder-dockercfg-ns4k2-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.283505 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0db9d434-26af-4738-bb93-05cd9b720c87-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.283525 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.283872 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.283948 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: 
\"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.283996 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/0db9d434-26af-4738-bb93-05cd9b720c87-builder-dockercfg-ns4k2-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.284069 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.284115 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.284227 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.284279 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0db9d434-26af-4738-bb93-05cd9b720c87-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.385780 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0db9d434-26af-4738-bb93-05cd9b720c87-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.385907 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7hww\" (UniqueName: \"kubernetes.io/projected/0db9d434-26af-4738-bb93-05cd9b720c87-kube-api-access-k7hww\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.385947 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.385980 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/0db9d434-26af-4738-bb93-05cd9b720c87-builder-dockercfg-ns4k2-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.386009 
4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.386008 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0db9d434-26af-4738-bb93-05cd9b720c87-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.386036 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0db9d434-26af-4738-bb93-05cd9b720c87-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.386128 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0db9d434-26af-4738-bb93-05cd9b720c87-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.386211 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 
11:29:10.386247 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.386283 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/0db9d434-26af-4738-bb93-05cd9b720c87-builder-dockercfg-ns4k2-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.386378 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.386398 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.386448 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " 
pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.387015 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.387213 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.387319 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.387464 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.387586 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: 
\"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.387769 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.388457 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.399526 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/0db9d434-26af-4738-bb93-05cd9b720c87-builder-dockercfg-ns4k2-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.399857 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/0db9d434-26af-4738-bb93-05cd9b720c87-builder-dockercfg-ns4k2-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.407281 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7hww\" (UniqueName: 
\"kubernetes.io/projected/0db9d434-26af-4738-bb93-05cd9b720c87-kube-api-access-k7hww\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.427578 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.650538 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.854126 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"0db9d434-26af-4738-bb93-05cd9b720c87","Type":"ContainerStarted","Data":"36a666488ecd6d15d08d3ab59870b43434b273fc6058e7f55ac7c1ecc6d3a04a"} Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.910813 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vkfs6" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.974986 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vkfs6"] Jan 20 11:29:11 crc kubenswrapper[4725]: I0120 11:29:11.863729 4725 generic.go:334] "Generic (PLEG): container finished" podID="0db9d434-26af-4738-bb93-05cd9b720c87" containerID="c39d6de3e24d8f3a14c460d9395b3e4c5d0c7f4110899d7ced5dff416dd88a6f" exitCode=0 Jan 20 11:29:11 crc kubenswrapper[4725]: I0120 11:29:11.864429 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vkfs6" podUID="19e454ee-77bb-40ff-a78b-661546d1cc26" containerName="registry-server" containerID="cri-o://dd054d47ac60c0a05d4f5f487a487dc59c9d8edb55ac724e00a76e2e06771bd2" gracePeriod=2 Jan 20 11:29:11 crc kubenswrapper[4725]: I0120 11:29:11.863915 4725 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"0db9d434-26af-4738-bb93-05cd9b720c87","Type":"ContainerDied","Data":"c39d6de3e24d8f3a14c460d9395b3e4c5d0c7f4110899d7ced5dff416dd88a6f"} Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.271263 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vkfs6" Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.322807 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19e454ee-77bb-40ff-a78b-661546d1cc26-catalog-content\") pod \"19e454ee-77bb-40ff-a78b-661546d1cc26\" (UID: \"19e454ee-77bb-40ff-a78b-661546d1cc26\") " Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.322888 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stfjh\" (UniqueName: \"kubernetes.io/projected/19e454ee-77bb-40ff-a78b-661546d1cc26-kube-api-access-stfjh\") pod \"19e454ee-77bb-40ff-a78b-661546d1cc26\" (UID: \"19e454ee-77bb-40ff-a78b-661546d1cc26\") " Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.323045 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19e454ee-77bb-40ff-a78b-661546d1cc26-utilities\") pod \"19e454ee-77bb-40ff-a78b-661546d1cc26\" (UID: \"19e454ee-77bb-40ff-a78b-661546d1cc26\") " Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.324498 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19e454ee-77bb-40ff-a78b-661546d1cc26-utilities" (OuterVolumeSpecName: "utilities") pod "19e454ee-77bb-40ff-a78b-661546d1cc26" (UID: "19e454ee-77bb-40ff-a78b-661546d1cc26"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.332546 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19e454ee-77bb-40ff-a78b-661546d1cc26-kube-api-access-stfjh" (OuterVolumeSpecName: "kube-api-access-stfjh") pod "19e454ee-77bb-40ff-a78b-661546d1cc26" (UID: "19e454ee-77bb-40ff-a78b-661546d1cc26"). InnerVolumeSpecName "kube-api-access-stfjh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.386807 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19e454ee-77bb-40ff-a78b-661546d1cc26-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19e454ee-77bb-40ff-a78b-661546d1cc26" (UID: "19e454ee-77bb-40ff-a78b-661546d1cc26"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.425490 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19e454ee-77bb-40ff-a78b-661546d1cc26-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.425534 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stfjh\" (UniqueName: \"kubernetes.io/projected/19e454ee-77bb-40ff-a78b-661546d1cc26-kube-api-access-stfjh\") on node \"crc\" DevicePath \"\""
Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.425547 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19e454ee-77bb-40ff-a78b-661546d1cc26-utilities\") on node \"crc\" DevicePath \"\""
Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.879546 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"0db9d434-26af-4738-bb93-05cd9b720c87","Type":"ContainerStarted","Data":"cd00c1d1c97cef650c244c3d6f0815615e809a8aa5699e75689e183731a91ab5"}
Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.886882 4725 generic.go:334] "Generic (PLEG): container finished" podID="19e454ee-77bb-40ff-a78b-661546d1cc26" containerID="dd054d47ac60c0a05d4f5f487a487dc59c9d8edb55ac724e00a76e2e06771bd2" exitCode=0
Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.886957 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vkfs6"
Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.886966 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkfs6" event={"ID":"19e454ee-77bb-40ff-a78b-661546d1cc26","Type":"ContainerDied","Data":"dd054d47ac60c0a05d4f5f487a487dc59c9d8edb55ac724e00a76e2e06771bd2"}
Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.887070 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkfs6" event={"ID":"19e454ee-77bb-40ff-a78b-661546d1cc26","Type":"ContainerDied","Data":"2d4727da80686ae11420fefc3155e2cfb58d10a64c59aa8f6a79ffbd6e6c73e2"}
Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.887113 4725 scope.go:117] "RemoveContainer" containerID="dd054d47ac60c0a05d4f5f487a487dc59c9d8edb55ac724e00a76e2e06771bd2"
Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.910889 4725 scope.go:117] "RemoveContainer" containerID="9fe80860779e71290c302ab4d2ade7e5495476679097aec296b10762eb20d959"
Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.918292 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-1-build" podStartSLOduration=2.918270725 podStartE2EDuration="2.918270725s" podCreationTimestamp="2026-01-20 11:29:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:29:12.910690657 +0000 UTC m=+1481.119012650" watchObservedRunningTime="2026-01-20 11:29:12.918270725 +0000 UTC m=+1481.126592688"
Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.942915 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vkfs6"]
Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.944715 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vkfs6"]
Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.956864 4725 scope.go:117] "RemoveContainer" containerID="697c829883daaf99403d534b91f22a7c51fba27ae733892f60ac1cb76ed67d9e"
Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.977272 4725 scope.go:117] "RemoveContainer" containerID="dd054d47ac60c0a05d4f5f487a487dc59c9d8edb55ac724e00a76e2e06771bd2"
Jan 20 11:29:12 crc kubenswrapper[4725]: E0120 11:29:12.977902 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd054d47ac60c0a05d4f5f487a487dc59c9d8edb55ac724e00a76e2e06771bd2\": container with ID starting with dd054d47ac60c0a05d4f5f487a487dc59c9d8edb55ac724e00a76e2e06771bd2 not found: ID does not exist" containerID="dd054d47ac60c0a05d4f5f487a487dc59c9d8edb55ac724e00a76e2e06771bd2"
Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.977950 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd054d47ac60c0a05d4f5f487a487dc59c9d8edb55ac724e00a76e2e06771bd2"} err="failed to get container status \"dd054d47ac60c0a05d4f5f487a487dc59c9d8edb55ac724e00a76e2e06771bd2\": rpc error: code = NotFound desc = could not find container \"dd054d47ac60c0a05d4f5f487a487dc59c9d8edb55ac724e00a76e2e06771bd2\": container with ID starting with dd054d47ac60c0a05d4f5f487a487dc59c9d8edb55ac724e00a76e2e06771bd2 not found: ID does not exist"
Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.977991 4725 scope.go:117] "RemoveContainer" containerID="9fe80860779e71290c302ab4d2ade7e5495476679097aec296b10762eb20d959"
Jan 20 11:29:12 crc kubenswrapper[4725]: E0120 11:29:12.978370 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fe80860779e71290c302ab4d2ade7e5495476679097aec296b10762eb20d959\": container with ID starting with 9fe80860779e71290c302ab4d2ade7e5495476679097aec296b10762eb20d959 not found: ID does not exist" containerID="9fe80860779e71290c302ab4d2ade7e5495476679097aec296b10762eb20d959"
Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.978425 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fe80860779e71290c302ab4d2ade7e5495476679097aec296b10762eb20d959"} err="failed to get container status \"9fe80860779e71290c302ab4d2ade7e5495476679097aec296b10762eb20d959\": rpc error: code = NotFound desc = could not find container \"9fe80860779e71290c302ab4d2ade7e5495476679097aec296b10762eb20d959\": container with ID starting with 9fe80860779e71290c302ab4d2ade7e5495476679097aec296b10762eb20d959 not found: ID does not exist"
Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.978472 4725 scope.go:117] "RemoveContainer" containerID="697c829883daaf99403d534b91f22a7c51fba27ae733892f60ac1cb76ed67d9e"
Jan 20 11:29:12 crc kubenswrapper[4725]: E0120 11:29:12.979105 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"697c829883daaf99403d534b91f22a7c51fba27ae733892f60ac1cb76ed67d9e\": container with ID starting with 697c829883daaf99403d534b91f22a7c51fba27ae733892f60ac1cb76ed67d9e not found: ID does not exist" containerID="697c829883daaf99403d534b91f22a7c51fba27ae733892f60ac1cb76ed67d9e"
Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.979169 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"697c829883daaf99403d534b91f22a7c51fba27ae733892f60ac1cb76ed67d9e"} err="failed to get container status \"697c829883daaf99403d534b91f22a7c51fba27ae733892f60ac1cb76ed67d9e\": rpc error: code = NotFound desc = could not find container \"697c829883daaf99403d534b91f22a7c51fba27ae733892f60ac1cb76ed67d9e\": container with ID starting with 697c829883daaf99403d534b91f22a7c51fba27ae733892f60ac1cb76ed67d9e not found: ID does not exist"
Jan 20 11:29:14 crc kubenswrapper[4725]: I0120 11:29:14.942028 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19e454ee-77bb-40ff-a78b-661546d1cc26" path="/var/lib/kubelet/pods/19e454ee-77bb-40ff-a78b-661546d1cc26/volumes"
Jan 20 11:29:20 crc kubenswrapper[4725]: I0120 11:29:20.614939 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"]
Jan 20 11:29:20 crc kubenswrapper[4725]: I0120 11:29:20.615928 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/prometheus-webhook-snmp-1-build" podUID="0db9d434-26af-4738-bb93-05cd9b720c87" containerName="docker-build" containerID="cri-o://cd00c1d1c97cef650c244c3d6f0815615e809a8aa5699e75689e183731a91ab5" gracePeriod=30
Jan 20 11:29:20 crc kubenswrapper[4725]: I0120 11:29:20.948375 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_0db9d434-26af-4738-bb93-05cd9b720c87/docker-build/0.log"
Jan 20 11:29:20 crc kubenswrapper[4725]: I0120 11:29:20.949582 4725 generic.go:334] "Generic (PLEG): container finished" podID="0db9d434-26af-4738-bb93-05cd9b720c87" containerID="cd00c1d1c97cef650c244c3d6f0815615e809a8aa5699e75689e183731a91ab5" exitCode=1
Jan 20 11:29:20 crc kubenswrapper[4725]: I0120 11:29:20.949650 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"0db9d434-26af-4738-bb93-05cd9b720c87","Type":"ContainerDied","Data":"cd00c1d1c97cef650c244c3d6f0815615e809a8aa5699e75689e183731a91ab5"}
Jan 20 11:29:20 crc kubenswrapper[4725]: I0120 11:29:20.949690 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"0db9d434-26af-4738-bb93-05cd9b720c87","Type":"ContainerDied","Data":"36a666488ecd6d15d08d3ab59870b43434b273fc6058e7f55ac7c1ecc6d3a04a"}
Jan 20 11:29:20 crc kubenswrapper[4725]: I0120 11:29:20.949702 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36a666488ecd6d15d08d3ab59870b43434b273fc6058e7f55ac7c1ecc6d3a04a"
Jan 20 11:29:20 crc kubenswrapper[4725]: I0120 11:29:20.984237 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_0db9d434-26af-4738-bb93-05cd9b720c87/docker-build/0.log"
Jan 20 11:29:20 crc kubenswrapper[4725]: I0120 11:29:20.984875 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.125450 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-container-storage-root\") pod \"0db9d434-26af-4738-bb93-05cd9b720c87\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") "
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.125524 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-ca-bundles\") pod \"0db9d434-26af-4738-bb93-05cd9b720c87\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") "
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.125587 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0db9d434-26af-4738-bb93-05cd9b720c87-node-pullsecrets\") pod \"0db9d434-26af-4738-bb93-05cd9b720c87\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") "
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.125622 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-container-storage-run\") pod \"0db9d434-26af-4738-bb93-05cd9b720c87\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") "
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.125672 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/0db9d434-26af-4738-bb93-05cd9b720c87-builder-dockercfg-ns4k2-push\") pod \"0db9d434-26af-4738-bb93-05cd9b720c87\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") "
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.125731 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0db9d434-26af-4738-bb93-05cd9b720c87-buildcachedir\") pod \"0db9d434-26af-4738-bb93-05cd9b720c87\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") "
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.125753 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-build-blob-cache\") pod \"0db9d434-26af-4738-bb93-05cd9b720c87\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") "
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.125772 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/0db9d434-26af-4738-bb93-05cd9b720c87-builder-dockercfg-ns4k2-pull\") pod \"0db9d434-26af-4738-bb93-05cd9b720c87\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") "
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.125797 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-buildworkdir\") pod \"0db9d434-26af-4738-bb93-05cd9b720c87\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") "
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.125863 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-proxy-ca-bundles\") pod \"0db9d434-26af-4738-bb93-05cd9b720c87\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") "
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.125900 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7hww\" (UniqueName: \"kubernetes.io/projected/0db9d434-26af-4738-bb93-05cd9b720c87-kube-api-access-k7hww\") pod \"0db9d434-26af-4738-bb93-05cd9b720c87\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") "
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.125932 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-system-configs\") pod \"0db9d434-26af-4738-bb93-05cd9b720c87\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") "
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.126047 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0db9d434-26af-4738-bb93-05cd9b720c87-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "0db9d434-26af-4738-bb93-05cd9b720c87" (UID: "0db9d434-26af-4738-bb93-05cd9b720c87"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.126356 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0db9d434-26af-4738-bb93-05cd9b720c87-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "0db9d434-26af-4738-bb93-05cd9b720c87" (UID: "0db9d434-26af-4738-bb93-05cd9b720c87"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.128494 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "0db9d434-26af-4738-bb93-05cd9b720c87" (UID: "0db9d434-26af-4738-bb93-05cd9b720c87"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.128992 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "0db9d434-26af-4738-bb93-05cd9b720c87" (UID: "0db9d434-26af-4738-bb93-05cd9b720c87"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.129103 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "0db9d434-26af-4738-bb93-05cd9b720c87" (UID: "0db9d434-26af-4738-bb93-05cd9b720c87"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.129121 4725 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0db9d434-26af-4738-bb93-05cd9b720c87-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.129197 4725 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0db9d434-26af-4738-bb93-05cd9b720c87-buildcachedir\") on node \"crc\" DevicePath \"\""
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.129210 4725 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.129012 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "0db9d434-26af-4738-bb93-05cd9b720c87" (UID: "0db9d434-26af-4738-bb93-05cd9b720c87"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.129781 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "0db9d434-26af-4738-bb93-05cd9b720c87" (UID: "0db9d434-26af-4738-bb93-05cd9b720c87"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.135559 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0db9d434-26af-4738-bb93-05cd9b720c87-builder-dockercfg-ns4k2-pull" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-pull") pod "0db9d434-26af-4738-bb93-05cd9b720c87" (UID: "0db9d434-26af-4738-bb93-05cd9b720c87"). InnerVolumeSpecName "builder-dockercfg-ns4k2-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.135873 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0db9d434-26af-4738-bb93-05cd9b720c87-builder-dockercfg-ns4k2-push" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-push") pod "0db9d434-26af-4738-bb93-05cd9b720c87" (UID: "0db9d434-26af-4738-bb93-05cd9b720c87"). InnerVolumeSpecName "builder-dockercfg-ns4k2-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.136941 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0db9d434-26af-4738-bb93-05cd9b720c87-kube-api-access-k7hww" (OuterVolumeSpecName: "kube-api-access-k7hww") pod "0db9d434-26af-4738-bb93-05cd9b720c87" (UID: "0db9d434-26af-4738-bb93-05cd9b720c87"). InnerVolumeSpecName "kube-api-access-k7hww". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.212509 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "0db9d434-26af-4738-bb93-05cd9b720c87" (UID: "0db9d434-26af-4738-bb93-05cd9b720c87"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.230587 4725 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.230822 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-container-storage-run\") on node \"crc\" DevicePath \"\""
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.230840 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/0db9d434-26af-4738-bb93-05cd9b720c87-builder-dockercfg-ns4k2-push\") on node \"crc\" DevicePath \"\""
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.230855 4725 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-build-blob-cache\") on node \"crc\" DevicePath \"\""
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.230891 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/0db9d434-26af-4738-bb93-05cd9b720c87-builder-dockercfg-ns4k2-pull\") on node \"crc\" DevicePath \"\""
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.230908 4725 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-buildworkdir\") on node \"crc\" DevicePath \"\""
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.230920 4725 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.230932 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7hww\" (UniqueName: \"kubernetes.io/projected/0db9d434-26af-4738-bb93-05cd9b720c87-kube-api-access-k7hww\") on node \"crc\" DevicePath \"\""
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.523998 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "0db9d434-26af-4738-bb93-05cd9b720c87" (UID: "0db9d434-26af-4738-bb93-05cd9b720c87"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.536319 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-container-storage-root\") on node \"crc\" DevicePath \"\""
Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.955799 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.020790 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"]
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.033565 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"]
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.230425 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"]
Jan 20 11:29:22 crc kubenswrapper[4725]: E0120 11:29:22.230780 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19e454ee-77bb-40ff-a78b-661546d1cc26" containerName="extract-utilities"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.230800 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="19e454ee-77bb-40ff-a78b-661546d1cc26" containerName="extract-utilities"
Jan 20 11:29:22 crc kubenswrapper[4725]: E0120 11:29:22.230813 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19e454ee-77bb-40ff-a78b-661546d1cc26" containerName="registry-server"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.230821 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="19e454ee-77bb-40ff-a78b-661546d1cc26" containerName="registry-server"
Jan 20 11:29:22 crc kubenswrapper[4725]: E0120 11:29:22.230835 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0db9d434-26af-4738-bb93-05cd9b720c87" containerName="docker-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.230842 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="0db9d434-26af-4738-bb93-05cd9b720c87" containerName="docker-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: E0120 11:29:22.230863 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19e454ee-77bb-40ff-a78b-661546d1cc26" containerName="extract-content"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.230869 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="19e454ee-77bb-40ff-a78b-661546d1cc26" containerName="extract-content"
Jan 20 11:29:22 crc kubenswrapper[4725]: E0120 11:29:22.230883 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0db9d434-26af-4738-bb93-05cd9b720c87" containerName="manage-dockerfile"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.230894 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="0db9d434-26af-4738-bb93-05cd9b720c87" containerName="manage-dockerfile"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.231043 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="0db9d434-26af-4738-bb93-05cd9b720c87" containerName="docker-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.231065 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="19e454ee-77bb-40ff-a78b-661546d1cc26" containerName="registry-server"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.232221 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.234228 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-sys-config"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.235820 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-global-ca"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.236018 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ns4k2"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.238395 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-ca"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.248342 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/851c53a0-c674-49b2-88dc-77da0a70406b-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.248415 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.248465 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88l7j\" (UniqueName: \"kubernetes.io/projected/851c53a0-c674-49b2-88dc-77da0a70406b-kube-api-access-88l7j\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.248491 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.248519 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/851c53a0-c674-49b2-88dc-77da0a70406b-builder-dockercfg-ns4k2-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.248579 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.248596 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.248617 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.248652 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.248674 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/851c53a0-c674-49b2-88dc-77da0a70406b-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.248699 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.248719 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/851c53a0-c674-49b2-88dc-77da0a70406b-builder-dockercfg-ns4k2-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.258632 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"]
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.349892 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/851c53a0-c674-49b2-88dc-77da0a70406b-builder-dockercfg-ns4k2-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.349960 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.349989 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.350015 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.350050 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.350074 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/851c53a0-c674-49b2-88dc-77da0a70406b-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.350130 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/851c53a0-c674-49b2-88dc-77da0a70406b-builder-dockercfg-ns4k2-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.350151 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.350172 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/851c53a0-c674-49b2-88dc-77da0a70406b-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.350425 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/851c53a0-c674-49b2-88dc-77da0a70406b-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.350481 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.350547 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88l7j\" (UniqueName: \"kubernetes.io/projected/851c53a0-c674-49b2-88dc-77da0a70406b-kube-api-access-88l7j\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.350585 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.350593 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.350711 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.350811 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.351046 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/851c53a0-c674-49b2-88dc-77da0a70406b-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.351546 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.351564 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-system-configs\") pod
\"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.351697 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.354296 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.355357 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/851c53a0-c674-49b2-88dc-77da0a70406b-builder-dockercfg-ns4k2-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.362934 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/851c53a0-c674-49b2-88dc-77da0a70406b-builder-dockercfg-ns4k2-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.377610 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88l7j\" (UniqueName: 
\"kubernetes.io/projected/851c53a0-c674-49b2-88dc-77da0a70406b-kube-api-access-88l7j\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.550781 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.763903 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.943286 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0db9d434-26af-4738-bb93-05cd9b720c87" path="/var/lib/kubelet/pods/0db9d434-26af-4738-bb93-05cd9b720c87/volumes" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.986749 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"851c53a0-c674-49b2-88dc-77da0a70406b","Type":"ContainerStarted","Data":"6a9aff2c07fcb35085b065af0d4d52d91283e430a1d195a0198fb4e039bb9494"} Jan 20 11:29:23 crc kubenswrapper[4725]: I0120 11:29:23.997402 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"851c53a0-c674-49b2-88dc-77da0a70406b","Type":"ContainerStarted","Data":"4445b1a0e79c8d9eb8c5ea0bd6b3f97b942d1c463c2f1b85d3e880737f51ed91"} Jan 20 11:29:25 crc kubenswrapper[4725]: I0120 11:29:25.007693 4725 generic.go:334] "Generic (PLEG): container finished" podID="851c53a0-c674-49b2-88dc-77da0a70406b" containerID="4445b1a0e79c8d9eb8c5ea0bd6b3f97b942d1c463c2f1b85d3e880737f51ed91" exitCode=0 Jan 20 11:29:25 crc kubenswrapper[4725]: I0120 11:29:25.007780 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" 
event={"ID":"851c53a0-c674-49b2-88dc-77da0a70406b","Type":"ContainerDied","Data":"4445b1a0e79c8d9eb8c5ea0bd6b3f97b942d1c463c2f1b85d3e880737f51ed91"} Jan 20 11:29:26 crc kubenswrapper[4725]: I0120 11:29:26.021341 4725 generic.go:334] "Generic (PLEG): container finished" podID="851c53a0-c674-49b2-88dc-77da0a70406b" containerID="6218152dbfd3ab2c2a840223eb50d597295ccfb61dc4dd813ca3437b108d3143" exitCode=0 Jan 20 11:29:26 crc kubenswrapper[4725]: I0120 11:29:26.021469 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"851c53a0-c674-49b2-88dc-77da0a70406b","Type":"ContainerDied","Data":"6218152dbfd3ab2c2a840223eb50d597295ccfb61dc4dd813ca3437b108d3143"} Jan 20 11:29:26 crc kubenswrapper[4725]: I0120 11:29:26.077500 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_851c53a0-c674-49b2-88dc-77da0a70406b/manage-dockerfile/0.log" Jan 20 11:29:27 crc kubenswrapper[4725]: I0120 11:29:27.034434 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"851c53a0-c674-49b2-88dc-77da0a70406b","Type":"ContainerStarted","Data":"d02a02aca60254ad250ffe6b9525dda6f9b904e95118572ca9b292f09c32136b"} Jan 20 11:29:27 crc kubenswrapper[4725]: I0120 11:29:27.072778 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-2-build" podStartSLOduration=5.072751965 podStartE2EDuration="5.072751965s" podCreationTimestamp="2026-01-20 11:29:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:29:27.066467997 +0000 UTC m=+1495.274789970" watchObservedRunningTime="2026-01-20 11:29:27.072751965 +0000 UTC m=+1495.281073938" Jan 20 11:30:00 crc kubenswrapper[4725]: I0120 11:30:00.158539 4725 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc"] Jan 20 11:30:00 crc kubenswrapper[4725]: I0120 11:30:00.160643 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc" Jan 20 11:30:00 crc kubenswrapper[4725]: I0120 11:30:00.164907 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 11:30:00 crc kubenswrapper[4725]: I0120 11:30:00.165345 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 11:30:00 crc kubenswrapper[4725]: I0120 11:30:00.175717 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc"] Jan 20 11:30:00 crc kubenswrapper[4725]: I0120 11:30:00.305196 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-secret-volume\") pod \"collect-profiles-29481810-txmbc\" (UID: \"0fdb152c-7b26-4ed6-8bb8-6a846224c67b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc" Jan 20 11:30:00 crc kubenswrapper[4725]: I0120 11:30:00.305353 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48v85\" (UniqueName: \"kubernetes.io/projected/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-kube-api-access-48v85\") pod \"collect-profiles-29481810-txmbc\" (UID: \"0fdb152c-7b26-4ed6-8bb8-6a846224c67b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc" Jan 20 11:30:00 crc kubenswrapper[4725]: I0120 11:30:00.305798 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-config-volume\") pod \"collect-profiles-29481810-txmbc\" (UID: \"0fdb152c-7b26-4ed6-8bb8-6a846224c67b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc" Jan 20 11:30:00 crc kubenswrapper[4725]: I0120 11:30:00.407207 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-config-volume\") pod \"collect-profiles-29481810-txmbc\" (UID: \"0fdb152c-7b26-4ed6-8bb8-6a846224c67b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc" Jan 20 11:30:00 crc kubenswrapper[4725]: I0120 11:30:00.407293 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-secret-volume\") pod \"collect-profiles-29481810-txmbc\" (UID: \"0fdb152c-7b26-4ed6-8bb8-6a846224c67b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc" Jan 20 11:30:00 crc kubenswrapper[4725]: I0120 11:30:00.407318 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48v85\" (UniqueName: \"kubernetes.io/projected/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-kube-api-access-48v85\") pod \"collect-profiles-29481810-txmbc\" (UID: \"0fdb152c-7b26-4ed6-8bb8-6a846224c67b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc" Jan 20 11:30:00 crc kubenswrapper[4725]: I0120 11:30:00.408290 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-config-volume\") pod \"collect-profiles-29481810-txmbc\" (UID: \"0fdb152c-7b26-4ed6-8bb8-6a846224c67b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc" Jan 20 11:30:00 crc kubenswrapper[4725]: I0120 11:30:00.424999 4725 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-secret-volume\") pod \"collect-profiles-29481810-txmbc\" (UID: \"0fdb152c-7b26-4ed6-8bb8-6a846224c67b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc" Jan 20 11:30:00 crc kubenswrapper[4725]: I0120 11:30:00.426595 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48v85\" (UniqueName: \"kubernetes.io/projected/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-kube-api-access-48v85\") pod \"collect-profiles-29481810-txmbc\" (UID: \"0fdb152c-7b26-4ed6-8bb8-6a846224c67b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc" Jan 20 11:30:00 crc kubenswrapper[4725]: I0120 11:30:00.487060 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc" Jan 20 11:30:00 crc kubenswrapper[4725]: I0120 11:30:00.924312 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc"] Jan 20 11:30:01 crc kubenswrapper[4725]: I0120 11:30:01.307489 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc" event={"ID":"0fdb152c-7b26-4ed6-8bb8-6a846224c67b","Type":"ContainerStarted","Data":"9b0dcdea8536fd69cc550db76c797c2b233941b5ed5fc0345fea4348ff9e28b4"} Jan 20 11:30:02 crc kubenswrapper[4725]: I0120 11:30:02.321282 4725 generic.go:334] "Generic (PLEG): container finished" podID="0fdb152c-7b26-4ed6-8bb8-6a846224c67b" containerID="19fb964594f75fcdba986836c9a966bf2aa65e41d99e7666a933d08acb12b332" exitCode=0 Jan 20 11:30:02 crc kubenswrapper[4725]: I0120 11:30:02.321472 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc" 
event={"ID":"0fdb152c-7b26-4ed6-8bb8-6a846224c67b","Type":"ContainerDied","Data":"19fb964594f75fcdba986836c9a966bf2aa65e41d99e7666a933d08acb12b332"} Jan 20 11:30:03 crc kubenswrapper[4725]: I0120 11:30:03.584916 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc" Jan 20 11:30:03 crc kubenswrapper[4725]: I0120 11:30:03.759683 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-secret-volume\") pod \"0fdb152c-7b26-4ed6-8bb8-6a846224c67b\" (UID: \"0fdb152c-7b26-4ed6-8bb8-6a846224c67b\") " Jan 20 11:30:03 crc kubenswrapper[4725]: I0120 11:30:03.759798 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-config-volume\") pod \"0fdb152c-7b26-4ed6-8bb8-6a846224c67b\" (UID: \"0fdb152c-7b26-4ed6-8bb8-6a846224c67b\") " Jan 20 11:30:03 crc kubenswrapper[4725]: I0120 11:30:03.759936 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48v85\" (UniqueName: \"kubernetes.io/projected/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-kube-api-access-48v85\") pod \"0fdb152c-7b26-4ed6-8bb8-6a846224c67b\" (UID: \"0fdb152c-7b26-4ed6-8bb8-6a846224c67b\") " Jan 20 11:30:03 crc kubenswrapper[4725]: I0120 11:30:03.761839 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-config-volume" (OuterVolumeSpecName: "config-volume") pod "0fdb152c-7b26-4ed6-8bb8-6a846224c67b" (UID: "0fdb152c-7b26-4ed6-8bb8-6a846224c67b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:30:03 crc kubenswrapper[4725]: I0120 11:30:03.767985 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0fdb152c-7b26-4ed6-8bb8-6a846224c67b" (UID: "0fdb152c-7b26-4ed6-8bb8-6a846224c67b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:30:03 crc kubenswrapper[4725]: I0120 11:30:03.782229 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-kube-api-access-48v85" (OuterVolumeSpecName: "kube-api-access-48v85") pod "0fdb152c-7b26-4ed6-8bb8-6a846224c67b" (UID: "0fdb152c-7b26-4ed6-8bb8-6a846224c67b"). InnerVolumeSpecName "kube-api-access-48v85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:30:03 crc kubenswrapper[4725]: I0120 11:30:03.861801 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48v85\" (UniqueName: \"kubernetes.io/projected/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-kube-api-access-48v85\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:03 crc kubenswrapper[4725]: I0120 11:30:03.862353 4725 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:03 crc kubenswrapper[4725]: I0120 11:30:03.862367 4725 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:04 crc kubenswrapper[4725]: I0120 11:30:04.340813 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc" 
event={"ID":"0fdb152c-7b26-4ed6-8bb8-6a846224c67b","Type":"ContainerDied","Data":"9b0dcdea8536fd69cc550db76c797c2b233941b5ed5fc0345fea4348ff9e28b4"} Jan 20 11:30:04 crc kubenswrapper[4725]: I0120 11:30:04.340887 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b0dcdea8536fd69cc550db76c797c2b233941b5ed5fc0345fea4348ff9e28b4" Jan 20 11:30:04 crc kubenswrapper[4725]: I0120 11:30:04.340964 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc" Jan 20 11:30:20 crc kubenswrapper[4725]: I0120 11:30:20.510834 4725 generic.go:334] "Generic (PLEG): container finished" podID="851c53a0-c674-49b2-88dc-77da0a70406b" containerID="d02a02aca60254ad250ffe6b9525dda6f9b904e95118572ca9b292f09c32136b" exitCode=0 Jan 20 11:30:20 crc kubenswrapper[4725]: I0120 11:30:20.510939 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"851c53a0-c674-49b2-88dc-77da0a70406b","Type":"ContainerDied","Data":"d02a02aca60254ad250ffe6b9525dda6f9b904e95118572ca9b292f09c32136b"} Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.829017 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.954545 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-container-storage-run\") pod \"851c53a0-c674-49b2-88dc-77da0a70406b\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.954647 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-build-blob-cache\") pod \"851c53a0-c674-49b2-88dc-77da0a70406b\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.954710 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-system-configs\") pod \"851c53a0-c674-49b2-88dc-77da0a70406b\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.954741 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-proxy-ca-bundles\") pod \"851c53a0-c674-49b2-88dc-77da0a70406b\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.954800 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/851c53a0-c674-49b2-88dc-77da0a70406b-builder-dockercfg-ns4k2-push\") pod \"851c53a0-c674-49b2-88dc-77da0a70406b\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.954857 4725 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/851c53a0-c674-49b2-88dc-77da0a70406b-buildcachedir\") pod \"851c53a0-c674-49b2-88dc-77da0a70406b\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.954881 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-container-storage-root\") pod \"851c53a0-c674-49b2-88dc-77da0a70406b\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.954906 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88l7j\" (UniqueName: \"kubernetes.io/projected/851c53a0-c674-49b2-88dc-77da0a70406b-kube-api-access-88l7j\") pod \"851c53a0-c674-49b2-88dc-77da0a70406b\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.954940 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/851c53a0-c674-49b2-88dc-77da0a70406b-builder-dockercfg-ns4k2-pull\") pod \"851c53a0-c674-49b2-88dc-77da0a70406b\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.954967 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-ca-bundles\") pod \"851c53a0-c674-49b2-88dc-77da0a70406b\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.955029 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-buildworkdir\") pod \"851c53a0-c674-49b2-88dc-77da0a70406b\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.955063 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/851c53a0-c674-49b2-88dc-77da0a70406b-node-pullsecrets\") pod \"851c53a0-c674-49b2-88dc-77da0a70406b\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.955619 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/851c53a0-c674-49b2-88dc-77da0a70406b-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "851c53a0-c674-49b2-88dc-77da0a70406b" (UID: "851c53a0-c674-49b2-88dc-77da0a70406b"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.955920 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "851c53a0-c674-49b2-88dc-77da0a70406b" (UID: "851c53a0-c674-49b2-88dc-77da0a70406b"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.956139 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "851c53a0-c674-49b2-88dc-77da0a70406b" (UID: "851c53a0-c674-49b2-88dc-77da0a70406b"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.956712 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/851c53a0-c674-49b2-88dc-77da0a70406b-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "851c53a0-c674-49b2-88dc-77da0a70406b" (UID: "851c53a0-c674-49b2-88dc-77da0a70406b"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.956786 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "851c53a0-c674-49b2-88dc-77da0a70406b" (UID: "851c53a0-c674-49b2-88dc-77da0a70406b"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.957736 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "851c53a0-c674-49b2-88dc-77da0a70406b" (UID: "851c53a0-c674-49b2-88dc-77da0a70406b"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.960792 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "851c53a0-c674-49b2-88dc-77da0a70406b" (UID: "851c53a0-c674-49b2-88dc-77da0a70406b"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.973665 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/851c53a0-c674-49b2-88dc-77da0a70406b-kube-api-access-88l7j" (OuterVolumeSpecName: "kube-api-access-88l7j") pod "851c53a0-c674-49b2-88dc-77da0a70406b" (UID: "851c53a0-c674-49b2-88dc-77da0a70406b"). InnerVolumeSpecName "kube-api-access-88l7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.974191 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/851c53a0-c674-49b2-88dc-77da0a70406b-builder-dockercfg-ns4k2-pull" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-pull") pod "851c53a0-c674-49b2-88dc-77da0a70406b" (UID: "851c53a0-c674-49b2-88dc-77da0a70406b"). InnerVolumeSpecName "builder-dockercfg-ns4k2-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.974299 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/851c53a0-c674-49b2-88dc-77da0a70406b-builder-dockercfg-ns4k2-push" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-push") pod "851c53a0-c674-49b2-88dc-77da0a70406b" (UID: "851c53a0-c674-49b2-88dc-77da0a70406b"). InnerVolumeSpecName "builder-dockercfg-ns4k2-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.057738 4725 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.057831 4725 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.057845 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/851c53a0-c674-49b2-88dc-77da0a70406b-builder-dockercfg-ns4k2-push\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.057865 4725 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/851c53a0-c674-49b2-88dc-77da0a70406b-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.057878 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88l7j\" (UniqueName: \"kubernetes.io/projected/851c53a0-c674-49b2-88dc-77da0a70406b-kube-api-access-88l7j\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.057891 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/851c53a0-c674-49b2-88dc-77da0a70406b-builder-dockercfg-ns4k2-pull\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.057903 4725 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-ca-bundles\") on node 
\"crc\" DevicePath \"\"" Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.057916 4725 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.057945 4725 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/851c53a0-c674-49b2-88dc-77da0a70406b-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.057958 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.087514 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "851c53a0-c674-49b2-88dc-77da0a70406b" (UID: "851c53a0-c674-49b2-88dc-77da0a70406b"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.159499 4725 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.529613 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"851c53a0-c674-49b2-88dc-77da0a70406b","Type":"ContainerDied","Data":"6a9aff2c07fcb35085b065af0d4d52d91283e430a1d195a0198fb4e039bb9494"} Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.529680 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a9aff2c07fcb35085b065af0d4d52d91283e430a1d195a0198fb4e039bb9494" Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.529799 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.901543 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "851c53a0-c674-49b2-88dc-77da0a70406b" (UID: "851c53a0-c674-49b2-88dc-77da0a70406b"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.972653 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.584504 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Jan 20 11:30:32 crc kubenswrapper[4725]: E0120 11:30:32.585742 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851c53a0-c674-49b2-88dc-77da0a70406b" containerName="manage-dockerfile" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.585761 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="851c53a0-c674-49b2-88dc-77da0a70406b" containerName="manage-dockerfile" Jan 20 11:30:32 crc kubenswrapper[4725]: E0120 11:30:32.585791 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fdb152c-7b26-4ed6-8bb8-6a846224c67b" containerName="collect-profiles" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.585797 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fdb152c-7b26-4ed6-8bb8-6a846224c67b" containerName="collect-profiles" Jan 20 11:30:32 crc kubenswrapper[4725]: E0120 11:30:32.585815 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851c53a0-c674-49b2-88dc-77da0a70406b" containerName="docker-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.585824 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="851c53a0-c674-49b2-88dc-77da0a70406b" containerName="docker-build" Jan 20 11:30:32 crc kubenswrapper[4725]: E0120 11:30:32.585831 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851c53a0-c674-49b2-88dc-77da0a70406b" containerName="git-clone" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.585837 4725 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="851c53a0-c674-49b2-88dc-77da0a70406b" containerName="git-clone" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.585949 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fdb152c-7b26-4ed6-8bb8-6a846224c67b" containerName="collect-profiles" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.585968 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="851c53a0-c674-49b2-88dc-77da0a70406b" containerName="docker-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.586760 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.590952 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-1-ca" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.591957 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ns4k2" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.593157 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-1-sys-config" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.597040 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-1-global-ca" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.606137 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.659278 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" 
(UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.659382 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/f34838ec-7be3-417b-9394-8b6ebffb8dd9-builder-dockercfg-ns4k2-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.659488 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/f34838ec-7be3-417b-9394-8b6ebffb8dd9-builder-dockercfg-ns4k2-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.659506 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.659578 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f34838ec-7be3-417b-9394-8b6ebffb8dd9-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.659701 4725 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.659772 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dpcx\" (UniqueName: \"kubernetes.io/projected/f34838ec-7be3-417b-9394-8b6ebffb8dd9-kube-api-access-4dpcx\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.659793 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f34838ec-7be3-417b-9394-8b6ebffb8dd9-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.659826 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.659903 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.659954 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.659986 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.760969 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.761578 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " 
pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.761750 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.761941 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.762232 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/f34838ec-7be3-417b-9394-8b6ebffb8dd9-builder-dockercfg-ns4k2-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.762440 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/f34838ec-7be3-417b-9394-8b6ebffb8dd9-builder-dockercfg-ns4k2-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.762600 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" 
(UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.762775 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f34838ec-7be3-417b-9394-8b6ebffb8dd9-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.762698 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.762849 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.761698 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc 
kubenswrapper[4725]: I0120 11:30:32.763379 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f34838ec-7be3-417b-9394-8b6ebffb8dd9-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.763600 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.763734 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.764046 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.764048 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dpcx\" (UniqueName: \"kubernetes.io/projected/f34838ec-7be3-417b-9394-8b6ebffb8dd9-kube-api-access-4dpcx\") pod \"service-telemetry-operator-bundle-1-build\" (UID: 
\"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.764150 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f34838ec-7be3-417b-9394-8b6ebffb8dd9-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.764225 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.764726 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.764738 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.764720 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/f34838ec-7be3-417b-9394-8b6ebffb8dd9-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.770880 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/f34838ec-7be3-417b-9394-8b6ebffb8dd9-builder-dockercfg-ns4k2-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.770908 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/f34838ec-7be3-417b-9394-8b6ebffb8dd9-builder-dockercfg-ns4k2-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.784724 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dpcx\" (UniqueName: \"kubernetes.io/projected/f34838ec-7be3-417b-9394-8b6ebffb8dd9-kube-api-access-4dpcx\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.909063 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:33 crc kubenswrapper[4725]: I0120 11:30:33.138106 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Jan 20 11:30:33 crc kubenswrapper[4725]: I0120 11:30:33.625576 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"f34838ec-7be3-417b-9394-8b6ebffb8dd9","Type":"ContainerStarted","Data":"322aa27a42fe64732b61397f3af12e6913daf6723474abf3c9c0bde2daa65c96"} Jan 20 11:30:33 crc kubenswrapper[4725]: I0120 11:30:33.626164 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"f34838ec-7be3-417b-9394-8b6ebffb8dd9","Type":"ContainerStarted","Data":"3b9caee28289884d1a8f320326ecf12177d8c3af9c0ce2a05fbdbe77cf7afbd5"} Jan 20 11:30:34 crc kubenswrapper[4725]: I0120 11:30:34.636942 4725 generic.go:334] "Generic (PLEG): container finished" podID="f34838ec-7be3-417b-9394-8b6ebffb8dd9" containerID="322aa27a42fe64732b61397f3af12e6913daf6723474abf3c9c0bde2daa65c96" exitCode=0 Jan 20 11:30:34 crc kubenswrapper[4725]: I0120 11:30:34.637020 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"f34838ec-7be3-417b-9394-8b6ebffb8dd9","Type":"ContainerDied","Data":"322aa27a42fe64732b61397f3af12e6913daf6723474abf3c9c0bde2daa65c96"} Jan 20 11:30:35 crc kubenswrapper[4725]: I0120 11:30:35.656391 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_f34838ec-7be3-417b-9394-8b6ebffb8dd9/docker-build/0.log" Jan 20 11:30:35 crc kubenswrapper[4725]: I0120 11:30:35.657488 4725 generic.go:334] "Generic (PLEG): container finished" podID="f34838ec-7be3-417b-9394-8b6ebffb8dd9" 
containerID="78f02562103ddffde1093928ec6242b4c8b49a6f4ce128c626fad826fff2e675" exitCode=1 Jan 20 11:30:35 crc kubenswrapper[4725]: I0120 11:30:35.657560 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"f34838ec-7be3-417b-9394-8b6ebffb8dd9","Type":"ContainerDied","Data":"78f02562103ddffde1093928ec6242b4c8b49a6f4ce128c626fad826fff2e675"} Jan 20 11:30:36 crc kubenswrapper[4725]: I0120 11:30:36.942260 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_f34838ec-7be3-417b-9394-8b6ebffb8dd9/docker-build/0.log" Jan 20 11:30:36 crc kubenswrapper[4725]: I0120 11:30:36.944356 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.033465 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-proxy-ca-bundles\") pod \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.033541 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f34838ec-7be3-417b-9394-8b6ebffb8dd9-buildcachedir\") pod \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.033600 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-system-configs\") pod \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " Jan 20 11:30:37 crc 
kubenswrapper[4725]: I0120 11:30:37.033662 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-ca-bundles\") pod \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.033643 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f34838ec-7be3-417b-9394-8b6ebffb8dd9-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "f34838ec-7be3-417b-9394-8b6ebffb8dd9" (UID: "f34838ec-7be3-417b-9394-8b6ebffb8dd9"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.033703 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-container-storage-root\") pod \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.033880 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/f34838ec-7be3-417b-9394-8b6ebffb8dd9-builder-dockercfg-ns4k2-pull\") pod \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.033949 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f34838ec-7be3-417b-9394-8b6ebffb8dd9-node-pullsecrets\") pod \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.034019 4725 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/f34838ec-7be3-417b-9394-8b6ebffb8dd9-builder-dockercfg-ns4k2-push\") pod \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.034430 4725 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f34838ec-7be3-417b-9394-8b6ebffb8dd9-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.034798 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f34838ec-7be3-417b-9394-8b6ebffb8dd9-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "f34838ec-7be3-417b-9394-8b6ebffb8dd9" (UID: "f34838ec-7be3-417b-9394-8b6ebffb8dd9"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.034954 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "f34838ec-7be3-417b-9394-8b6ebffb8dd9" (UID: "f34838ec-7be3-417b-9394-8b6ebffb8dd9"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.035053 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "f34838ec-7be3-417b-9394-8b6ebffb8dd9" (UID: "f34838ec-7be3-417b-9394-8b6ebffb8dd9"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.035982 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "f34838ec-7be3-417b-9394-8b6ebffb8dd9" (UID: "f34838ec-7be3-417b-9394-8b6ebffb8dd9"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.039709 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "f34838ec-7be3-417b-9394-8b6ebffb8dd9" (UID: "f34838ec-7be3-417b-9394-8b6ebffb8dd9"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.044384 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f34838ec-7be3-417b-9394-8b6ebffb8dd9-builder-dockercfg-ns4k2-push" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-push") pod "f34838ec-7be3-417b-9394-8b6ebffb8dd9" (UID: "f34838ec-7be3-417b-9394-8b6ebffb8dd9"). InnerVolumeSpecName "builder-dockercfg-ns4k2-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.044452 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f34838ec-7be3-417b-9394-8b6ebffb8dd9-builder-dockercfg-ns4k2-pull" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-pull") pod "f34838ec-7be3-417b-9394-8b6ebffb8dd9" (UID: "f34838ec-7be3-417b-9394-8b6ebffb8dd9"). InnerVolumeSpecName "builder-dockercfg-ns4k2-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.135167 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-blob-cache\") pod \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") "
Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.135273 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dpcx\" (UniqueName: \"kubernetes.io/projected/f34838ec-7be3-417b-9394-8b6ebffb8dd9-kube-api-access-4dpcx\") pod \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") "
Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.135327 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-buildworkdir\") pod \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") "
Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.135354 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-container-storage-run\") pod \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") "
Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.135590 4725 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.135607 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-container-storage-root\") on node \"crc\" DevicePath \"\""
Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.135619 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/f34838ec-7be3-417b-9394-8b6ebffb8dd9-builder-dockercfg-ns4k2-pull\") on node \"crc\" DevicePath \"\""
Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.135632 4725 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f34838ec-7be3-417b-9394-8b6ebffb8dd9-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.135642 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/f34838ec-7be3-417b-9394-8b6ebffb8dd9-builder-dockercfg-ns4k2-push\") on node \"crc\" DevicePath \"\""
Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.135653 4725 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.135698 4725 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.135985 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "f34838ec-7be3-417b-9394-8b6ebffb8dd9" (UID: "f34838ec-7be3-417b-9394-8b6ebffb8dd9"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.136487 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "f34838ec-7be3-417b-9394-8b6ebffb8dd9" (UID: "f34838ec-7be3-417b-9394-8b6ebffb8dd9"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.136741 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "f34838ec-7be3-417b-9394-8b6ebffb8dd9" (UID: "f34838ec-7be3-417b-9394-8b6ebffb8dd9"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.139508 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f34838ec-7be3-417b-9394-8b6ebffb8dd9-kube-api-access-4dpcx" (OuterVolumeSpecName: "kube-api-access-4dpcx") pod "f34838ec-7be3-417b-9394-8b6ebffb8dd9" (UID: "f34838ec-7be3-417b-9394-8b6ebffb8dd9"). InnerVolumeSpecName "kube-api-access-4dpcx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.237469 4725 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-blob-cache\") on node \"crc\" DevicePath \"\""
Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.237517 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dpcx\" (UniqueName: \"kubernetes.io/projected/f34838ec-7be3-417b-9394-8b6ebffb8dd9-kube-api-access-4dpcx\") on node \"crc\" DevicePath \"\""
Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.237533 4725 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-buildworkdir\") on node \"crc\" DevicePath \"\""
Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.237544 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-container-storage-run\") on node \"crc\" DevicePath \"\""
Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.674488 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_f34838ec-7be3-417b-9394-8b6ebffb8dd9/docker-build/0.log"
Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.675246 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"f34838ec-7be3-417b-9394-8b6ebffb8dd9","Type":"ContainerDied","Data":"3b9caee28289884d1a8f320326ecf12177d8c3af9c0ce2a05fbdbe77cf7afbd5"}
Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.675294 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b9caee28289884d1a8f320326ecf12177d8c3af9c0ce2a05fbdbe77cf7afbd5"
Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.675379 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 20 11:30:43 crc kubenswrapper[4725]: I0120 11:30:43.385856 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"]
Jan 20 11:30:43 crc kubenswrapper[4725]: I0120 11:30:43.391528 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"]
Jan 20 11:30:44 crc kubenswrapper[4725]: I0120 11:30:44.954273 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f34838ec-7be3-417b-9394-8b6ebffb8dd9" path="/var/lib/kubelet/pods/f34838ec-7be3-417b-9394-8b6ebffb8dd9/volumes"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.014841 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"]
Jan 20 11:30:45 crc kubenswrapper[4725]: E0120 11:30:45.015197 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f34838ec-7be3-417b-9394-8b6ebffb8dd9" containerName="manage-dockerfile"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.015218 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f34838ec-7be3-417b-9394-8b6ebffb8dd9" containerName="manage-dockerfile"
Jan 20 11:30:45 crc kubenswrapper[4725]: E0120 11:30:45.015234 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f34838ec-7be3-417b-9394-8b6ebffb8dd9" containerName="docker-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.015242 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f34838ec-7be3-417b-9394-8b6ebffb8dd9" containerName="docker-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.015380 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f34838ec-7be3-417b-9394-8b6ebffb8dd9" containerName="docker-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.017437 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.024604 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-2-sys-config"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.024805 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-2-global-ca"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.024941 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-2-ca"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.025606 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ns4k2"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.040961 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"]
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.060106 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.060326 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.060360 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.060377 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxhhk\" (UniqueName: \"kubernetes.io/projected/814e040b-c073-451b-80c4-2e90cb554a6b-kube-api-access-hxhhk\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.060490 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.060809 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/814e040b-c073-451b-80c4-2e90cb554a6b-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.060851 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/814e040b-c073-451b-80c4-2e90cb554a6b-builder-dockercfg-ns4k2-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.060896 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.060988 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/814e040b-c073-451b-80c4-2e90cb554a6b-builder-dockercfg-ns4k2-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.061018 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/814e040b-c073-451b-80c4-2e90cb554a6b-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.061101 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.061143 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.163273 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/814e040b-c073-451b-80c4-2e90cb554a6b-builder-dockercfg-ns4k2-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.163335 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/814e040b-c073-451b-80c4-2e90cb554a6b-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.163381 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.163416 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.163447 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.163468 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.163495 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.163512 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxhhk\" (UniqueName: \"kubernetes.io/projected/814e040b-c073-451b-80c4-2e90cb554a6b-kube-api-access-hxhhk\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.163551 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.163573 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/814e040b-c073-451b-80c4-2e90cb554a6b-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.163600 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/814e040b-c073-451b-80c4-2e90cb554a6b-builder-dockercfg-ns4k2-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.163621 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.163714 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/814e040b-c073-451b-80c4-2e90cb554a6b-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.164386 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.164677 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.164730 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/814e040b-c073-451b-80c4-2e90cb554a6b-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.164798 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.165100 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.165151 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.165484 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.166266 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.170850 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/814e040b-c073-451b-80c4-2e90cb554a6b-builder-dockercfg-ns4k2-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.171528 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/814e040b-c073-451b-80c4-2e90cb554a6b-builder-dockercfg-ns4k2-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.192836 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxhhk\" (UniqueName: \"kubernetes.io/projected/814e040b-c073-451b-80c4-2e90cb554a6b-kube-api-access-hxhhk\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.360459 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.593462 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"]
Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.741509 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"814e040b-c073-451b-80c4-2e90cb554a6b","Type":"ContainerStarted","Data":"a3ed9de274f153291cfae19b23cc93d0467c036d25f39f703af1ea1d97e74a14"}
Jan 20 11:30:46 crc kubenswrapper[4725]: I0120 11:30:46.754893 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"814e040b-c073-451b-80c4-2e90cb554a6b","Type":"ContainerStarted","Data":"a228fc05dcf1d258035d853f2b9fb5a0b0fe393defb0dd4411a77e8b1fb737dd"}
Jan 20 11:30:47 crc kubenswrapper[4725]: I0120 11:30:47.764451 4725 generic.go:334] "Generic (PLEG): container finished" podID="814e040b-c073-451b-80c4-2e90cb554a6b" containerID="a228fc05dcf1d258035d853f2b9fb5a0b0fe393defb0dd4411a77e8b1fb737dd" exitCode=0
Jan 20 11:30:47 crc kubenswrapper[4725]: I0120 11:30:47.764519 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"814e040b-c073-451b-80c4-2e90cb554a6b","Type":"ContainerDied","Data":"a228fc05dcf1d258035d853f2b9fb5a0b0fe393defb0dd4411a77e8b1fb737dd"}
Jan 20 11:30:48 crc kubenswrapper[4725]: I0120 11:30:48.775450 4725 generic.go:334] "Generic (PLEG): container finished" podID="814e040b-c073-451b-80c4-2e90cb554a6b" containerID="43a4493455d38e0ab93389748f33cc58cabcde6d5c7b7b59319e8b0f3d4f3e9b" exitCode=0
Jan 20 11:30:48 crc kubenswrapper[4725]: I0120 11:30:48.775552 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"814e040b-c073-451b-80c4-2e90cb554a6b","Type":"ContainerDied","Data":"43a4493455d38e0ab93389748f33cc58cabcde6d5c7b7b59319e8b0f3d4f3e9b"}
Jan 20 11:30:48 crc kubenswrapper[4725]: I0120 11:30:48.836949 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-2-build_814e040b-c073-451b-80c4-2e90cb554a6b/manage-dockerfile/0.log"
Jan 20 11:30:49 crc kubenswrapper[4725]: I0120 11:30:49.789644 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"814e040b-c073-451b-80c4-2e90cb554a6b","Type":"ContainerStarted","Data":"bbb59cdd24eccaccbdc033a2eaf566480990fb577b5ba529dc5d97b6a7bb547f"}
Jan 20 11:30:49 crc kubenswrapper[4725]: I0120 11:30:49.823807 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-bundle-2-build" podStartSLOduration=5.823776377 podStartE2EDuration="5.823776377s" podCreationTimestamp="2026-01-20 11:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:30:49.819731079 +0000 UTC m=+1578.028053082" watchObservedRunningTime="2026-01-20 11:30:49.823776377 +0000 UTC m=+1578.032098350"
Jan 20 11:30:51 crc kubenswrapper[4725]: I0120 11:30:51.809238 4725 generic.go:334] "Generic (PLEG): container finished" podID="814e040b-c073-451b-80c4-2e90cb554a6b" containerID="bbb59cdd24eccaccbdc033a2eaf566480990fb577b5ba529dc5d97b6a7bb547f" exitCode=0
Jan 20 11:30:51 crc kubenswrapper[4725]: I0120 11:30:51.809343 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"814e040b-c073-451b-80c4-2e90cb554a6b","Type":"ContainerDied","Data":"bbb59cdd24eccaccbdc033a2eaf566480990fb577b5ba529dc5d97b6a7bb547f"}
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.092022 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.292267 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-build-blob-cache\") pod \"814e040b-c073-451b-80c4-2e90cb554a6b\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") "
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.292363 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/814e040b-c073-451b-80c4-2e90cb554a6b-builder-dockercfg-ns4k2-pull\") pod \"814e040b-c073-451b-80c4-2e90cb554a6b\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") "
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.292418 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-container-storage-root\") pod \"814e040b-c073-451b-80c4-2e90cb554a6b\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") "
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.292466 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/814e040b-c073-451b-80c4-2e90cb554a6b-buildcachedir\") pod \"814e040b-c073-451b-80c4-2e90cb554a6b\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") "
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.292515 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxhhk\" (UniqueName: \"kubernetes.io/projected/814e040b-c073-451b-80c4-2e90cb554a6b-kube-api-access-hxhhk\") pod \"814e040b-c073-451b-80c4-2e90cb554a6b\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") "
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.292556 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-ca-bundles\") pod \"814e040b-c073-451b-80c4-2e90cb554a6b\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") "
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.292597 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-system-configs\") pod \"814e040b-c073-451b-80c4-2e90cb554a6b\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") "
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.292628 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-buildworkdir\") pod \"814e040b-c073-451b-80c4-2e90cb554a6b\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") "
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.292677 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-container-storage-run\") pod \"814e040b-c073-451b-80c4-2e90cb554a6b\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") "
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.292734 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/814e040b-c073-451b-80c4-2e90cb554a6b-builder-dockercfg-ns4k2-push\") pod \"814e040b-c073-451b-80c4-2e90cb554a6b\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") "
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.292787 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-proxy-ca-bundles\") pod \"814e040b-c073-451b-80c4-2e90cb554a6b\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") "
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.292809 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/814e040b-c073-451b-80c4-2e90cb554a6b-node-pullsecrets\") pod \"814e040b-c073-451b-80c4-2e90cb554a6b\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") "
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.293111 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/814e040b-c073-451b-80c4-2e90cb554a6b-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "814e040b-c073-451b-80c4-2e90cb554a6b" (UID: "814e040b-c073-451b-80c4-2e90cb554a6b"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.294178 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "814e040b-c073-451b-80c4-2e90cb554a6b" (UID: "814e040b-c073-451b-80c4-2e90cb554a6b"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.294351 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "814e040b-c073-451b-80c4-2e90cb554a6b" (UID: "814e040b-c073-451b-80c4-2e90cb554a6b"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.294314 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/814e040b-c073-451b-80c4-2e90cb554a6b-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "814e040b-c073-451b-80c4-2e90cb554a6b" (UID: "814e040b-c073-451b-80c4-2e90cb554a6b"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.295290 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "814e040b-c073-451b-80c4-2e90cb554a6b" (UID: "814e040b-c073-451b-80c4-2e90cb554a6b"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.295381 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "814e040b-c073-451b-80c4-2e90cb554a6b" (UID: "814e040b-c073-451b-80c4-2e90cb554a6b"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.295746 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "814e040b-c073-451b-80c4-2e90cb554a6b" (UID: "814e040b-c073-451b-80c4-2e90cb554a6b"). InnerVolumeSpecName "build-blob-cache".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.295926 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "814e040b-c073-451b-80c4-2e90cb554a6b" (UID: "814e040b-c073-451b-80c4-2e90cb554a6b"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.299856 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/814e040b-c073-451b-80c4-2e90cb554a6b-kube-api-access-hxhhk" (OuterVolumeSpecName: "kube-api-access-hxhhk") pod "814e040b-c073-451b-80c4-2e90cb554a6b" (UID: "814e040b-c073-451b-80c4-2e90cb554a6b"). InnerVolumeSpecName "kube-api-access-hxhhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.300120 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/814e040b-c073-451b-80c4-2e90cb554a6b-builder-dockercfg-ns4k2-push" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-push") pod "814e040b-c073-451b-80c4-2e90cb554a6b" (UID: "814e040b-c073-451b-80c4-2e90cb554a6b"). InnerVolumeSpecName "builder-dockercfg-ns4k2-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.300283 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "814e040b-c073-451b-80c4-2e90cb554a6b" (UID: "814e040b-c073-451b-80c4-2e90cb554a6b"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.301181 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/814e040b-c073-451b-80c4-2e90cb554a6b-builder-dockercfg-ns4k2-pull" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-pull") pod "814e040b-c073-451b-80c4-2e90cb554a6b" (UID: "814e040b-c073-451b-80c4-2e90cb554a6b"). InnerVolumeSpecName "builder-dockercfg-ns4k2-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.394834 4725 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/814e040b-c073-451b-80c4-2e90cb554a6b-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.394889 4725 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.394906 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/814e040b-c073-451b-80c4-2e90cb554a6b-builder-dockercfg-ns4k2-pull\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.394924 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.394935 4725 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/814e040b-c073-451b-80c4-2e90cb554a6b-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.394944 4725 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxhhk\" (UniqueName: \"kubernetes.io/projected/814e040b-c073-451b-80c4-2e90cb554a6b-kube-api-access-hxhhk\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.394953 4725 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.394964 4725 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.394976 4725 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.394989 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.394999 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/814e040b-c073-451b-80c4-2e90cb554a6b-builder-dockercfg-ns4k2-push\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.395011 4725 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.829268 4725 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"814e040b-c073-451b-80c4-2e90cb554a6b","Type":"ContainerDied","Data":"a3ed9de274f153291cfae19b23cc93d0467c036d25f39f703af1ea1d97e74a14"} Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.829351 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.829355 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3ed9de274f153291cfae19b23cc93d0467c036d25f39f703af1ea1d97e74a14" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.576034 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Jan 20 11:30:57 crc kubenswrapper[4725]: E0120 11:30:57.577211 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="814e040b-c073-451b-80c4-2e90cb554a6b" containerName="docker-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.577229 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="814e040b-c073-451b-80c4-2e90cb554a6b" containerName="docker-build" Jan 20 11:30:57 crc kubenswrapper[4725]: E0120 11:30:57.577247 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="814e040b-c073-451b-80c4-2e90cb554a6b" containerName="manage-dockerfile" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.577254 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="814e040b-c073-451b-80c4-2e90cb554a6b" containerName="manage-dockerfile" Jan 20 11:30:57 crc kubenswrapper[4725]: E0120 11:30:57.577267 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="814e040b-c073-451b-80c4-2e90cb554a6b" containerName="git-clone" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.577278 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="814e040b-c073-451b-80c4-2e90cb554a6b" 
containerName="git-clone" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.577399 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="814e040b-c073-451b-80c4-2e90cb554a6b" containerName="docker-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.578262 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.581051 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-1-global-ca" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.581760 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ns4k2" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.581960 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-1-ca" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.592674 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.599121 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-1-sys-config" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.762193 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.762277 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.762326 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.762349 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3730545e-db48-47ff-bbaf-1374485e0a68-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.762496 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/3730545e-db48-47ff-bbaf-1374485e0a68-builder-dockercfg-ns4k2-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.762575 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " 
pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.762742 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.762862 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.763008 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.763172 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3730545e-db48-47ff-bbaf-1374485e0a68-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.763223 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/3730545e-db48-47ff-bbaf-1374485e0a68-builder-dockercfg-ns4k2-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.763258 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ws2f\" (UniqueName: \"kubernetes.io/projected/3730545e-db48-47ff-bbaf-1374485e0a68-kube-api-access-7ws2f\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.864625 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.864687 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.864725 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3730545e-db48-47ff-bbaf-1374485e0a68-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " 
pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.864753 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/3730545e-db48-47ff-bbaf-1374485e0a68-builder-dockercfg-ns4k2-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.864777 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ws2f\" (UniqueName: \"kubernetes.io/projected/3730545e-db48-47ff-bbaf-1374485e0a68-kube-api-access-7ws2f\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.864807 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.864847 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.864875 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.864899 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3730545e-db48-47ff-bbaf-1374485e0a68-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.864924 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/3730545e-db48-47ff-bbaf-1374485e0a68-builder-dockercfg-ns4k2-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.864944 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.864992 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 
11:30:57.865611 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3730545e-db48-47ff-bbaf-1374485e0a68-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.865812 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.866574 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.866699 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.866683 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " 
pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.867150 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3730545e-db48-47ff-bbaf-1374485e0a68-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.868044 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.868651 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.871513 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.874505 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/3730545e-db48-47ff-bbaf-1374485e0a68-builder-dockercfg-ns4k2-pull\") pod 
\"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.875712 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/3730545e-db48-47ff-bbaf-1374485e0a68-builder-dockercfg-ns4k2-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.890296 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ws2f\" (UniqueName: \"kubernetes.io/projected/3730545e-db48-47ff-bbaf-1374485e0a68-kube-api-access-7ws2f\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.944145 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:58 crc kubenswrapper[4725]: I0120 11:30:58.213501 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Jan 20 11:30:58 crc kubenswrapper[4725]: I0120 11:30:58.876990 4725 generic.go:334] "Generic (PLEG): container finished" podID="3730545e-db48-47ff-bbaf-1374485e0a68" containerID="cbb40b4a35af16ef739d7936989eb2a98cbe2e9f78178e91db6ddf8b1dfef24b" exitCode=0 Jan 20 11:30:58 crc kubenswrapper[4725]: I0120 11:30:58.877056 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"3730545e-db48-47ff-bbaf-1374485e0a68","Type":"ContainerDied","Data":"cbb40b4a35af16ef739d7936989eb2a98cbe2e9f78178e91db6ddf8b1dfef24b"} Jan 20 11:30:58 crc kubenswrapper[4725]: I0120 11:30:58.877515 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"3730545e-db48-47ff-bbaf-1374485e0a68","Type":"ContainerStarted","Data":"13c61c9b9dda1fd983408b19827c5cc397f84e67628de40558d79753ad990a7f"} Jan 20 11:30:59 crc kubenswrapper[4725]: I0120 11:30:59.887741 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_3730545e-db48-47ff-bbaf-1374485e0a68/docker-build/0.log" Jan 20 11:30:59 crc kubenswrapper[4725]: I0120 11:30:59.888716 4725 generic.go:334] "Generic (PLEG): container finished" podID="3730545e-db48-47ff-bbaf-1374485e0a68" containerID="697a37843b8a0440d43c4e8976463aac27a527f1025878803dd957ce26ac737d" exitCode=1 Jan 20 11:30:59 crc kubenswrapper[4725]: I0120 11:30:59.888762 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" 
event={"ID":"3730545e-db48-47ff-bbaf-1374485e0a68","Type":"ContainerDied","Data":"697a37843b8a0440d43c4e8976463aac27a527f1025878803dd957ce26ac737d"} Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.171514 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_3730545e-db48-47ff-bbaf-1374485e0a68/docker-build/0.log" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.172321 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.326692 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3730545e-db48-47ff-bbaf-1374485e0a68-buildcachedir\") pod \"3730545e-db48-47ff-bbaf-1374485e0a68\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.326810 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-system-configs\") pod \"3730545e-db48-47ff-bbaf-1374485e0a68\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.326865 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-container-storage-run\") pod \"3730545e-db48-47ff-bbaf-1374485e0a68\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.326903 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-ca-bundles\") pod \"3730545e-db48-47ff-bbaf-1374485e0a68\" (UID: 
\"3730545e-db48-47ff-bbaf-1374485e0a68\") " Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.326963 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-build-blob-cache\") pod \"3730545e-db48-47ff-bbaf-1374485e0a68\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.327059 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-proxy-ca-bundles\") pod \"3730545e-db48-47ff-bbaf-1374485e0a68\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.327113 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/3730545e-db48-47ff-bbaf-1374485e0a68-builder-dockercfg-ns4k2-pull\") pod \"3730545e-db48-47ff-bbaf-1374485e0a68\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.327137 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-container-storage-root\") pod \"3730545e-db48-47ff-bbaf-1374485e0a68\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.327202 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ws2f\" (UniqueName: \"kubernetes.io/projected/3730545e-db48-47ff-bbaf-1374485e0a68-kube-api-access-7ws2f\") pod \"3730545e-db48-47ff-bbaf-1374485e0a68\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.327234 4725 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-buildworkdir\") pod \"3730545e-db48-47ff-bbaf-1374485e0a68\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.327295 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/3730545e-db48-47ff-bbaf-1374485e0a68-builder-dockercfg-ns4k2-push\") pod \"3730545e-db48-47ff-bbaf-1374485e0a68\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.327345 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3730545e-db48-47ff-bbaf-1374485e0a68-node-pullsecrets\") pod \"3730545e-db48-47ff-bbaf-1374485e0a68\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.327818 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3730545e-db48-47ff-bbaf-1374485e0a68-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "3730545e-db48-47ff-bbaf-1374485e0a68" (UID: "3730545e-db48-47ff-bbaf-1374485e0a68"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.327869 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3730545e-db48-47ff-bbaf-1374485e0a68-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "3730545e-db48-47ff-bbaf-1374485e0a68" (UID: "3730545e-db48-47ff-bbaf-1374485e0a68"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.328789 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "3730545e-db48-47ff-bbaf-1374485e0a68" (UID: "3730545e-db48-47ff-bbaf-1374485e0a68"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.328807 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "3730545e-db48-47ff-bbaf-1374485e0a68" (UID: "3730545e-db48-47ff-bbaf-1374485e0a68"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.329241 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "3730545e-db48-47ff-bbaf-1374485e0a68" (UID: "3730545e-db48-47ff-bbaf-1374485e0a68"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.329404 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "3730545e-db48-47ff-bbaf-1374485e0a68" (UID: "3730545e-db48-47ff-bbaf-1374485e0a68"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.329683 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "3730545e-db48-47ff-bbaf-1374485e0a68" (UID: "3730545e-db48-47ff-bbaf-1374485e0a68"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.330828 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "3730545e-db48-47ff-bbaf-1374485e0a68" (UID: "3730545e-db48-47ff-bbaf-1374485e0a68"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.331532 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "3730545e-db48-47ff-bbaf-1374485e0a68" (UID: "3730545e-db48-47ff-bbaf-1374485e0a68"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.336504 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3730545e-db48-47ff-bbaf-1374485e0a68-builder-dockercfg-ns4k2-pull" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-pull") pod "3730545e-db48-47ff-bbaf-1374485e0a68" (UID: "3730545e-db48-47ff-bbaf-1374485e0a68"). InnerVolumeSpecName "builder-dockercfg-ns4k2-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.336536 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3730545e-db48-47ff-bbaf-1374485e0a68-builder-dockercfg-ns4k2-push" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-push") pod "3730545e-db48-47ff-bbaf-1374485e0a68" (UID: "3730545e-db48-47ff-bbaf-1374485e0a68"). InnerVolumeSpecName "builder-dockercfg-ns4k2-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.336705 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3730545e-db48-47ff-bbaf-1374485e0a68-kube-api-access-7ws2f" (OuterVolumeSpecName: "kube-api-access-7ws2f") pod "3730545e-db48-47ff-bbaf-1374485e0a68" (UID: "3730545e-db48-47ff-bbaf-1374485e0a68"). InnerVolumeSpecName "kube-api-access-7ws2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.429369 4725 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3730545e-db48-47ff-bbaf-1374485e0a68-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.429435 4725 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.429451 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.429462 4725 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.429474 4725 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.429488 4725 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.429499 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/3730545e-db48-47ff-bbaf-1374485e0a68-builder-dockercfg-ns4k2-pull\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.429509 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.429521 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ws2f\" (UniqueName: \"kubernetes.io/projected/3730545e-db48-47ff-bbaf-1374485e0a68-kube-api-access-7ws2f\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.429531 4725 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.429539 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: 
\"kubernetes.io/secret/3730545e-db48-47ff-bbaf-1374485e0a68-builder-dockercfg-ns4k2-push\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.429551 4725 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3730545e-db48-47ff-bbaf-1374485e0a68-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.906521 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_3730545e-db48-47ff-bbaf-1374485e0a68/docker-build/0.log" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.907019 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"3730545e-db48-47ff-bbaf-1374485e0a68","Type":"ContainerDied","Data":"13c61c9b9dda1fd983408b19827c5cc397f84e67628de40558d79753ad990a7f"} Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.907066 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13c61c9b9dda1fd983408b19827c5cc397f84e67628de40558d79753ad990a7f" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.907118 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:31:08 crc kubenswrapper[4725]: I0120 11:31:08.529451 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Jan 20 11:31:08 crc kubenswrapper[4725]: I0120 11:31:08.535291 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Jan 20 11:31:08 crc kubenswrapper[4725]: I0120 11:31:08.944668 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3730545e-db48-47ff-bbaf-1374485e0a68" path="/var/lib/kubelet/pods/3730545e-db48-47ff-bbaf-1374485e0a68/volumes" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.155891 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"] Jan 20 11:31:10 crc kubenswrapper[4725]: E0120 11:31:10.156860 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3730545e-db48-47ff-bbaf-1374485e0a68" containerName="manage-dockerfile" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.156884 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3730545e-db48-47ff-bbaf-1374485e0a68" containerName="manage-dockerfile" Jan 20 11:31:10 crc kubenswrapper[4725]: E0120 11:31:10.156900 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3730545e-db48-47ff-bbaf-1374485e0a68" containerName="docker-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.156908 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3730545e-db48-47ff-bbaf-1374485e0a68" containerName="docker-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.157043 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="3730545e-db48-47ff-bbaf-1374485e0a68" containerName="docker-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.158301 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.161758 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-2-sys-config" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.162105 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-2-ca" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.162127 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ns4k2" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.163137 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-2-global-ca" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.183149 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"] Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.222329 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.222397 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.222470 4725 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b276c041-188f-4dd1-a7b4-0d0ba6531174-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.222523 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/b276c041-188f-4dd1-a7b4-0d0ba6531174-builder-dockercfg-ns4k2-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.222612 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4snh\" (UniqueName: \"kubernetes.io/projected/b276c041-188f-4dd1-a7b4-0d0ba6531174-kube-api-access-v4snh\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.222687 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.222765 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/b276c041-188f-4dd1-a7b4-0d0ba6531174-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.222860 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.222913 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.223050 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.223124 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 
20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.223176 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/b276c041-188f-4dd1-a7b4-0d0ba6531174-builder-dockercfg-ns4k2-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.324570 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b276c041-188f-4dd1-a7b4-0d0ba6531174-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.324668 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.324696 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.324744 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-buildworkdir\") pod 
\"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.324770 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.324802 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/b276c041-188f-4dd1-a7b4-0d0ba6531174-builder-dockercfg-ns4k2-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.324830 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.324850 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.324887 4725 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b276c041-188f-4dd1-a7b4-0d0ba6531174-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.324875 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b276c041-188f-4dd1-a7b4-0d0ba6531174-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.324919 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/b276c041-188f-4dd1-a7b4-0d0ba6531174-builder-dockercfg-ns4k2-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.325148 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4snh\" (UniqueName: \"kubernetes.io/projected/b276c041-188f-4dd1-a7b4-0d0ba6531174-kube-api-access-v4snh\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.325192 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" 
Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.325951 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.326265 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.326397 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b276c041-188f-4dd1-a7b4-0d0ba6531174-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.327317 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.327627 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: 
\"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.327654 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.327851 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.328131 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.333572 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/b276c041-188f-4dd1-a7b4-0d0ba6531174-builder-dockercfg-ns4k2-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.333922 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: 
\"kubernetes.io/secret/b276c041-188f-4dd1-a7b4-0d0ba6531174-builder-dockercfg-ns4k2-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.348902 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4snh\" (UniqueName: \"kubernetes.io/projected/b276c041-188f-4dd1-a7b4-0d0ba6531174-kube-api-access-v4snh\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.529319 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.816747 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"] Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.986842 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"b276c041-188f-4dd1-a7b4-0d0ba6531174","Type":"ContainerStarted","Data":"1a2a08d778f3b582e358b59a79dc7afb885edaabd7deb1fae92e438cfc39d404"} Jan 20 11:31:11 crc kubenswrapper[4725]: I0120 11:31:11.998063 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"b276c041-188f-4dd1-a7b4-0d0ba6531174","Type":"ContainerStarted","Data":"095bb767bc7664f78d71c0ee7ec40ec2255564b01b456613aa71fd3e4aaa3bba"} Jan 20 11:31:13 crc kubenswrapper[4725]: I0120 11:31:13.006841 4725 generic.go:334] "Generic (PLEG): container finished" podID="b276c041-188f-4dd1-a7b4-0d0ba6531174" containerID="095bb767bc7664f78d71c0ee7ec40ec2255564b01b456613aa71fd3e4aaa3bba" exitCode=0 Jan 20 11:31:13 crc 
kubenswrapper[4725]: I0120 11:31:13.007415 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"b276c041-188f-4dd1-a7b4-0d0ba6531174","Type":"ContainerDied","Data":"095bb767bc7664f78d71c0ee7ec40ec2255564b01b456613aa71fd3e4aaa3bba"} Jan 20 11:31:14 crc kubenswrapper[4725]: I0120 11:31:14.021478 4725 generic.go:334] "Generic (PLEG): container finished" podID="b276c041-188f-4dd1-a7b4-0d0ba6531174" containerID="3bde4ee52f0cffd609acce63c5f94debf2d5ab7ddc4ca8c67dfcc4b64f7f72be" exitCode=0 Jan 20 11:31:14 crc kubenswrapper[4725]: I0120 11:31:14.021602 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"b276c041-188f-4dd1-a7b4-0d0ba6531174","Type":"ContainerDied","Data":"3bde4ee52f0cffd609acce63c5f94debf2d5ab7ddc4ca8c67dfcc4b64f7f72be"} Jan 20 11:31:14 crc kubenswrapper[4725]: I0120 11:31:14.058610 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-2-build_b276c041-188f-4dd1-a7b4-0d0ba6531174/manage-dockerfile/0.log" Jan 20 11:31:15 crc kubenswrapper[4725]: I0120 11:31:15.033678 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"b276c041-188f-4dd1-a7b4-0d0ba6531174","Type":"ContainerStarted","Data":"4315023da56f5a041d9648d5368227b026e76e7ede2ede61b477a2c92be02303"} Jan 20 11:31:15 crc kubenswrapper[4725]: I0120 11:31:15.067389 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-bundle-2-build" podStartSLOduration=5.067359032 podStartE2EDuration="5.067359032s" podCreationTimestamp="2026-01-20 11:31:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:31:15.060302559 +0000 UTC m=+1603.268624542" 
watchObservedRunningTime="2026-01-20 11:31:15.067359032 +0000 UTC m=+1603.275681005" Jan 20 11:31:19 crc kubenswrapper[4725]: I0120 11:31:19.066811 4725 generic.go:334] "Generic (PLEG): container finished" podID="b276c041-188f-4dd1-a7b4-0d0ba6531174" containerID="4315023da56f5a041d9648d5368227b026e76e7ede2ede61b477a2c92be02303" exitCode=0 Jan 20 11:31:19 crc kubenswrapper[4725]: I0120 11:31:19.066890 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"b276c041-188f-4dd1-a7b4-0d0ba6531174","Type":"ContainerDied","Data":"4315023da56f5a041d9648d5368227b026e76e7ede2ede61b477a2c92be02303"} Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.383116 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.520355 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-blob-cache\") pod \"b276c041-188f-4dd1-a7b4-0d0ba6531174\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.520446 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/b276c041-188f-4dd1-a7b4-0d0ba6531174-builder-dockercfg-ns4k2-push\") pod \"b276c041-188f-4dd1-a7b4-0d0ba6531174\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.520492 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b276c041-188f-4dd1-a7b4-0d0ba6531174-node-pullsecrets\") pod \"b276c041-188f-4dd1-a7b4-0d0ba6531174\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " Jan 20 11:31:20 crc 
kubenswrapper[4725]: I0120 11:31:20.520526 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/b276c041-188f-4dd1-a7b4-0d0ba6531174-builder-dockercfg-ns4k2-pull\") pod \"b276c041-188f-4dd1-a7b4-0d0ba6531174\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.520548 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-system-configs\") pod \"b276c041-188f-4dd1-a7b4-0d0ba6531174\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.520590 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4snh\" (UniqueName: \"kubernetes.io/projected/b276c041-188f-4dd1-a7b4-0d0ba6531174-kube-api-access-v4snh\") pod \"b276c041-188f-4dd1-a7b4-0d0ba6531174\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.520630 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-proxy-ca-bundles\") pod \"b276c041-188f-4dd1-a7b4-0d0ba6531174\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.520647 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-ca-bundles\") pod \"b276c041-188f-4dd1-a7b4-0d0ba6531174\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.520623 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/b276c041-188f-4dd1-a7b4-0d0ba6531174-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "b276c041-188f-4dd1-a7b4-0d0ba6531174" (UID: "b276c041-188f-4dd1-a7b4-0d0ba6531174"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.520666 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-buildworkdir\") pod \"b276c041-188f-4dd1-a7b4-0d0ba6531174\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.520843 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-container-storage-run\") pod \"b276c041-188f-4dd1-a7b4-0d0ba6531174\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.520876 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-container-storage-root\") pod \"b276c041-188f-4dd1-a7b4-0d0ba6531174\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.520897 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b276c041-188f-4dd1-a7b4-0d0ba6531174-buildcachedir\") pod \"b276c041-188f-4dd1-a7b4-0d0ba6531174\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.521507 4725 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b276c041-188f-4dd1-a7b4-0d0ba6531174-node-pullsecrets\") on node 
\"crc\" DevicePath \"\"" Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.521550 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b276c041-188f-4dd1-a7b4-0d0ba6531174-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "b276c041-188f-4dd1-a7b4-0d0ba6531174" (UID: "b276c041-188f-4dd1-a7b4-0d0ba6531174"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.521597 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "b276c041-188f-4dd1-a7b4-0d0ba6531174" (UID: "b276c041-188f-4dd1-a7b4-0d0ba6531174"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.521666 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "b276c041-188f-4dd1-a7b4-0d0ba6531174" (UID: "b276c041-188f-4dd1-a7b4-0d0ba6531174"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.521768 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "b276c041-188f-4dd1-a7b4-0d0ba6531174" (UID: "b276c041-188f-4dd1-a7b4-0d0ba6531174"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.522217 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "b276c041-188f-4dd1-a7b4-0d0ba6531174" (UID: "b276c041-188f-4dd1-a7b4-0d0ba6531174"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.522372 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "b276c041-188f-4dd1-a7b4-0d0ba6531174" (UID: "b276c041-188f-4dd1-a7b4-0d0ba6531174"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.522549 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "b276c041-188f-4dd1-a7b4-0d0ba6531174" (UID: "b276c041-188f-4dd1-a7b4-0d0ba6531174"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.531271 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b276c041-188f-4dd1-a7b4-0d0ba6531174-builder-dockercfg-ns4k2-pull" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-pull") pod "b276c041-188f-4dd1-a7b4-0d0ba6531174" (UID: "b276c041-188f-4dd1-a7b4-0d0ba6531174"). InnerVolumeSpecName "builder-dockercfg-ns4k2-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.537423 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b276c041-188f-4dd1-a7b4-0d0ba6531174-kube-api-access-v4snh" (OuterVolumeSpecName: "kube-api-access-v4snh") pod "b276c041-188f-4dd1-a7b4-0d0ba6531174" (UID: "b276c041-188f-4dd1-a7b4-0d0ba6531174"). InnerVolumeSpecName "kube-api-access-v4snh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.540248 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b276c041-188f-4dd1-a7b4-0d0ba6531174-builder-dockercfg-ns4k2-push" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-push") pod "b276c041-188f-4dd1-a7b4-0d0ba6531174" (UID: "b276c041-188f-4dd1-a7b4-0d0ba6531174"). InnerVolumeSpecName "builder-dockercfg-ns4k2-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.551948 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "b276c041-188f-4dd1-a7b4-0d0ba6531174" (UID: "b276c041-188f-4dd1-a7b4-0d0ba6531174"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.622907 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.622965 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.622990 4725 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b276c041-188f-4dd1-a7b4-0d0ba6531174-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.623005 4725 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.623019 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/b276c041-188f-4dd1-a7b4-0d0ba6531174-builder-dockercfg-ns4k2-push\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.623036 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/b276c041-188f-4dd1-a7b4-0d0ba6531174-builder-dockercfg-ns4k2-pull\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.623050 4725 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-system-configs\") on 
node \"crc\" DevicePath \"\"" Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.623060 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4snh\" (UniqueName: \"kubernetes.io/projected/b276c041-188f-4dd1-a7b4-0d0ba6531174-kube-api-access-v4snh\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.623069 4725 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.623108 4725 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.623125 4725 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:21 crc kubenswrapper[4725]: I0120 11:31:21.088019 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"b276c041-188f-4dd1-a7b4-0d0ba6531174","Type":"ContainerDied","Data":"1a2a08d778f3b582e358b59a79dc7afb885edaabd7deb1fae92e438cfc39d404"} Jan 20 11:31:21 crc kubenswrapper[4725]: I0120 11:31:21.088090 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a2a08d778f3b582e358b59a79dc7afb885edaabd7deb1fae92e438cfc39d404" Jan 20 11:31:21 crc kubenswrapper[4725]: I0120 11:31:21.088130 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:26 crc kubenswrapper[4725]: I0120 11:31:26.728638 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:31:26 crc kubenswrapper[4725]: I0120 11:31:26.729599 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.765145 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Jan 20 11:31:38 crc kubenswrapper[4725]: E0120 11:31:38.766054 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b276c041-188f-4dd1-a7b4-0d0ba6531174" containerName="manage-dockerfile" Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.766069 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b276c041-188f-4dd1-a7b4-0d0ba6531174" containerName="manage-dockerfile" Jan 20 11:31:38 crc kubenswrapper[4725]: E0120 11:31:38.766113 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b276c041-188f-4dd1-a7b4-0d0ba6531174" containerName="git-clone" Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.766119 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b276c041-188f-4dd1-a7b4-0d0ba6531174" containerName="git-clone" Jan 20 11:31:38 crc kubenswrapper[4725]: E0120 11:31:38.766128 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b276c041-188f-4dd1-a7b4-0d0ba6531174" 
containerName="docker-build" Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.766136 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b276c041-188f-4dd1-a7b4-0d0ba6531174" containerName="docker-build" Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.766256 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="b276c041-188f-4dd1-a7b4-0d0ba6531174" containerName="docker-build" Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.767238 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.770242 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-global-ca" Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.770272 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-framework-index-dockercfg" Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.770367 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-ca" Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.770425 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ns4k2" Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.771144 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-sys-config" Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.795819 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.909022 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.910116 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-builder-dockercfg-ns4k2-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.910249 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.910322 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.910386 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-system-configs\") pod 
\"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.910538 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.910656 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.910711 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/184194a7-f32c-4db2-a055-5a776484cda8-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.910799 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 
11:31:38.910964 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.911028 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdbxk\" (UniqueName: \"kubernetes.io/projected/184194a7-f32c-4db2-a055-5a776484cda8-kube-api-access-fdbxk\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.911053 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/184194a7-f32c-4db2-a055-5a776484cda8-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.911191 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-builder-dockercfg-ns4k2-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.012278 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.012368 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdbxk\" (UniqueName: \"kubernetes.io/projected/184194a7-f32c-4db2-a055-5a776484cda8-kube-api-access-fdbxk\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.012396 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/184194a7-f32c-4db2-a055-5a776484cda8-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.012426 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-builder-dockercfg-ns4k2-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.012459 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 
11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.012497 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-builder-dockercfg-ns4k2-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.012531 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.012567 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.012599 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.012621 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.012654 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.012687 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/184194a7-f32c-4db2-a055-5a776484cda8-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.012727 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.013282 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:39 crc kubenswrapper[4725]: 
I0120 11:31:39.013314 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.013577 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.013641 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/184194a7-f32c-4db2-a055-5a776484cda8-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.013882 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/184194a7-f32c-4db2-a055-5a776484cda8-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.013941 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: 
\"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.014235 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.014479 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.014531 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.020305 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.026635 4725 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-builder-dockercfg-ns4k2-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.026850 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-builder-dockercfg-ns4k2-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.033715 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdbxk\" (UniqueName: \"kubernetes.io/projected/184194a7-f32c-4db2-a055-5a776484cda8-kube-api-access-fdbxk\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.116779 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.377414 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.402119 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"184194a7-f32c-4db2-a055-5a776484cda8","Type":"ContainerStarted","Data":"83f0df430461004a77dbc3f3c45e3d15c682f81a8ac4872c355830d1bd8280b0"} Jan 20 11:31:40 crc kubenswrapper[4725]: I0120 11:31:40.412276 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"184194a7-f32c-4db2-a055-5a776484cda8","Type":"ContainerStarted","Data":"980011b6035083b7c5ff3e0b221d1f3e58b3e76fa827f1157d70d0d0c290c65a"} Jan 20 11:31:41 crc kubenswrapper[4725]: I0120 11:31:41.424691 4725 generic.go:334] "Generic (PLEG): container finished" podID="184194a7-f32c-4db2-a055-5a776484cda8" containerID="980011b6035083b7c5ff3e0b221d1f3e58b3e76fa827f1157d70d0d0c290c65a" exitCode=0 Jan 20 11:31:41 crc kubenswrapper[4725]: I0120 11:31:41.424787 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"184194a7-f32c-4db2-a055-5a776484cda8","Type":"ContainerDied","Data":"980011b6035083b7c5ff3e0b221d1f3e58b3e76fa827f1157d70d0d0c290c65a"} Jan 20 11:31:42 crc kubenswrapper[4725]: I0120 11:31:42.436031 4725 generic.go:334] "Generic (PLEG): container finished" podID="184194a7-f32c-4db2-a055-5a776484cda8" containerID="546ec06121171d7d920d2290c0da83826529c5af64051bec728234cb8055fc0d" exitCode=0 Jan 20 11:31:42 crc kubenswrapper[4725]: I0120 11:31:42.436251 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" 
event={"ID":"184194a7-f32c-4db2-a055-5a776484cda8","Type":"ContainerDied","Data":"546ec06121171d7d920d2290c0da83826529c5af64051bec728234cb8055fc0d"} Jan 20 11:31:42 crc kubenswrapper[4725]: I0120 11:31:42.489664 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_184194a7-f32c-4db2-a055-5a776484cda8/manage-dockerfile/0.log" Jan 20 11:31:43 crc kubenswrapper[4725]: I0120 11:31:43.448580 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"184194a7-f32c-4db2-a055-5a776484cda8","Type":"ContainerStarted","Data":"4f404a7a9bd4e81eb4ce25e9968cd444dc303fa9ed15549c4f192754e01659a9"} Jan 20 11:31:43 crc kubenswrapper[4725]: I0120 11:31:43.481629 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-framework-index-1-build" podStartSLOduration=5.481598534 podStartE2EDuration="5.481598534s" podCreationTimestamp="2026-01-20 11:31:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:31:43.478669972 +0000 UTC m=+1631.686991955" watchObservedRunningTime="2026-01-20 11:31:43.481598534 +0000 UTC m=+1631.689920507" Jan 20 11:31:48 crc kubenswrapper[4725]: I0120 11:31:48.156338 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zw8vk"] Jan 20 11:31:48 crc kubenswrapper[4725]: I0120 11:31:48.160890 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zw8vk" Jan 20 11:31:48 crc kubenswrapper[4725]: I0120 11:31:48.172233 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zw8vk"] Jan 20 11:31:48 crc kubenswrapper[4725]: I0120 11:31:48.376940 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fv7s\" (UniqueName: \"kubernetes.io/projected/b19adb35-c4b0-4602-bb43-78f6e8b51b70-kube-api-access-8fv7s\") pod \"community-operators-zw8vk\" (UID: \"b19adb35-c4b0-4602-bb43-78f6e8b51b70\") " pod="openshift-marketplace/community-operators-zw8vk" Jan 20 11:31:48 crc kubenswrapper[4725]: I0120 11:31:48.377389 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b19adb35-c4b0-4602-bb43-78f6e8b51b70-utilities\") pod \"community-operators-zw8vk\" (UID: \"b19adb35-c4b0-4602-bb43-78f6e8b51b70\") " pod="openshift-marketplace/community-operators-zw8vk" Jan 20 11:31:48 crc kubenswrapper[4725]: I0120 11:31:48.377442 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b19adb35-c4b0-4602-bb43-78f6e8b51b70-catalog-content\") pod \"community-operators-zw8vk\" (UID: \"b19adb35-c4b0-4602-bb43-78f6e8b51b70\") " pod="openshift-marketplace/community-operators-zw8vk" Jan 20 11:31:48 crc kubenswrapper[4725]: I0120 11:31:48.478945 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b19adb35-c4b0-4602-bb43-78f6e8b51b70-utilities\") pod \"community-operators-zw8vk\" (UID: \"b19adb35-c4b0-4602-bb43-78f6e8b51b70\") " pod="openshift-marketplace/community-operators-zw8vk" Jan 20 11:31:48 crc kubenswrapper[4725]: I0120 11:31:48.479018 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b19adb35-c4b0-4602-bb43-78f6e8b51b70-catalog-content\") pod \"community-operators-zw8vk\" (UID: \"b19adb35-c4b0-4602-bb43-78f6e8b51b70\") " pod="openshift-marketplace/community-operators-zw8vk" Jan 20 11:31:48 crc kubenswrapper[4725]: I0120 11:31:48.479052 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fv7s\" (UniqueName: \"kubernetes.io/projected/b19adb35-c4b0-4602-bb43-78f6e8b51b70-kube-api-access-8fv7s\") pod \"community-operators-zw8vk\" (UID: \"b19adb35-c4b0-4602-bb43-78f6e8b51b70\") " pod="openshift-marketplace/community-operators-zw8vk" Jan 20 11:31:48 crc kubenswrapper[4725]: I0120 11:31:48.479815 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b19adb35-c4b0-4602-bb43-78f6e8b51b70-utilities\") pod \"community-operators-zw8vk\" (UID: \"b19adb35-c4b0-4602-bb43-78f6e8b51b70\") " pod="openshift-marketplace/community-operators-zw8vk" Jan 20 11:31:48 crc kubenswrapper[4725]: I0120 11:31:48.480167 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b19adb35-c4b0-4602-bb43-78f6e8b51b70-catalog-content\") pod \"community-operators-zw8vk\" (UID: \"b19adb35-c4b0-4602-bb43-78f6e8b51b70\") " pod="openshift-marketplace/community-operators-zw8vk" Jan 20 11:31:48 crc kubenswrapper[4725]: I0120 11:31:48.532040 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fv7s\" (UniqueName: \"kubernetes.io/projected/b19adb35-c4b0-4602-bb43-78f6e8b51b70-kube-api-access-8fv7s\") pod \"community-operators-zw8vk\" (UID: \"b19adb35-c4b0-4602-bb43-78f6e8b51b70\") " pod="openshift-marketplace/community-operators-zw8vk" Jan 20 11:31:48 crc kubenswrapper[4725]: I0120 11:31:48.793725 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zw8vk" Jan 20 11:31:49 crc kubenswrapper[4725]: I0120 11:31:49.326089 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zw8vk"] Jan 20 11:31:49 crc kubenswrapper[4725]: I0120 11:31:49.502594 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zw8vk" event={"ID":"b19adb35-c4b0-4602-bb43-78f6e8b51b70","Type":"ContainerStarted","Data":"70646bf0be00d4827288d1767e686373f5be91d20ff1f158f20cf715c5460fba"} Jan 20 11:31:51 crc kubenswrapper[4725]: I0120 11:31:51.519017 4725 generic.go:334] "Generic (PLEG): container finished" podID="b19adb35-c4b0-4602-bb43-78f6e8b51b70" containerID="362be9e3f35e2fb137b055aee99e399aa2b27e5f117a9c7943cdb1d994f0e906" exitCode=0 Jan 20 11:31:51 crc kubenswrapper[4725]: I0120 11:31:51.519577 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zw8vk" event={"ID":"b19adb35-c4b0-4602-bb43-78f6e8b51b70","Type":"ContainerDied","Data":"362be9e3f35e2fb137b055aee99e399aa2b27e5f117a9c7943cdb1d994f0e906"} Jan 20 11:31:53 crc kubenswrapper[4725]: I0120 11:31:53.539961 4725 generic.go:334] "Generic (PLEG): container finished" podID="b19adb35-c4b0-4602-bb43-78f6e8b51b70" containerID="9b56e90a3cd390f6d7c4ad431157ac0200a91ccbea36a6edd7721d4139974dcf" exitCode=0 Jan 20 11:31:53 crc kubenswrapper[4725]: I0120 11:31:53.540113 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zw8vk" event={"ID":"b19adb35-c4b0-4602-bb43-78f6e8b51b70","Type":"ContainerDied","Data":"9b56e90a3cd390f6d7c4ad431157ac0200a91ccbea36a6edd7721d4139974dcf"} Jan 20 11:31:56 crc kubenswrapper[4725]: I0120 11:31:56.727852 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:31:56 crc kubenswrapper[4725]: I0120 11:31:56.728491 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:32:01 crc kubenswrapper[4725]: I0120 11:32:01.619238 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zw8vk" event={"ID":"b19adb35-c4b0-4602-bb43-78f6e8b51b70","Type":"ContainerStarted","Data":"2f8f5a396c879473800e0a26812018c0651ce11b282271ee9bad74bb14b6f747"} Jan 20 11:32:01 crc kubenswrapper[4725]: I0120 11:32:01.648666 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zw8vk" podStartSLOduration=5.135286147 podStartE2EDuration="13.648641234s" podCreationTimestamp="2026-01-20 11:31:48 +0000 UTC" firstStartedPulling="2026-01-20 11:31:51.521379729 +0000 UTC m=+1639.729701712" lastFinishedPulling="2026-01-20 11:32:00.034734826 +0000 UTC m=+1648.243056799" observedRunningTime="2026-01-20 11:32:01.644659619 +0000 UTC m=+1649.852981592" watchObservedRunningTime="2026-01-20 11:32:01.648641234 +0000 UTC m=+1649.856963207" Jan 20 11:32:08 crc kubenswrapper[4725]: I0120 11:32:08.794895 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zw8vk" Jan 20 11:32:08 crc kubenswrapper[4725]: I0120 11:32:08.795963 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zw8vk" Jan 20 11:32:08 crc kubenswrapper[4725]: I0120 11:32:08.846342 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-zw8vk" Jan 20 11:32:09 crc kubenswrapper[4725]: I0120 11:32:09.731567 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zw8vk" Jan 20 11:32:09 crc kubenswrapper[4725]: I0120 11:32:09.790853 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zw8vk"] Jan 20 11:32:11 crc kubenswrapper[4725]: I0120 11:32:11.704023 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zw8vk" podUID="b19adb35-c4b0-4602-bb43-78f6e8b51b70" containerName="registry-server" containerID="cri-o://2f8f5a396c879473800e0a26812018c0651ce11b282271ee9bad74bb14b6f747" gracePeriod=2 Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.370888 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zw8vk" Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.516824 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b19adb35-c4b0-4602-bb43-78f6e8b51b70-catalog-content\") pod \"b19adb35-c4b0-4602-bb43-78f6e8b51b70\" (UID: \"b19adb35-c4b0-4602-bb43-78f6e8b51b70\") " Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.516912 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fv7s\" (UniqueName: \"kubernetes.io/projected/b19adb35-c4b0-4602-bb43-78f6e8b51b70-kube-api-access-8fv7s\") pod \"b19adb35-c4b0-4602-bb43-78f6e8b51b70\" (UID: \"b19adb35-c4b0-4602-bb43-78f6e8b51b70\") " Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.517064 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b19adb35-c4b0-4602-bb43-78f6e8b51b70-utilities\") pod 
\"b19adb35-c4b0-4602-bb43-78f6e8b51b70\" (UID: \"b19adb35-c4b0-4602-bb43-78f6e8b51b70\") " Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.518216 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b19adb35-c4b0-4602-bb43-78f6e8b51b70-utilities" (OuterVolumeSpecName: "utilities") pod "b19adb35-c4b0-4602-bb43-78f6e8b51b70" (UID: "b19adb35-c4b0-4602-bb43-78f6e8b51b70"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.524907 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b19adb35-c4b0-4602-bb43-78f6e8b51b70-kube-api-access-8fv7s" (OuterVolumeSpecName: "kube-api-access-8fv7s") pod "b19adb35-c4b0-4602-bb43-78f6e8b51b70" (UID: "b19adb35-c4b0-4602-bb43-78f6e8b51b70"). InnerVolumeSpecName "kube-api-access-8fv7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.584051 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b19adb35-c4b0-4602-bb43-78f6e8b51b70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b19adb35-c4b0-4602-bb43-78f6e8b51b70" (UID: "b19adb35-c4b0-4602-bb43-78f6e8b51b70"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.619171 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b19adb35-c4b0-4602-bb43-78f6e8b51b70-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.619218 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fv7s\" (UniqueName: \"kubernetes.io/projected/b19adb35-c4b0-4602-bb43-78f6e8b51b70-kube-api-access-8fv7s\") on node \"crc\" DevicePath \"\"" Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.619230 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b19adb35-c4b0-4602-bb43-78f6e8b51b70-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.716609 4725 generic.go:334] "Generic (PLEG): container finished" podID="b19adb35-c4b0-4602-bb43-78f6e8b51b70" containerID="2f8f5a396c879473800e0a26812018c0651ce11b282271ee9bad74bb14b6f747" exitCode=0 Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.716731 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zw8vk" Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.716718 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zw8vk" event={"ID":"b19adb35-c4b0-4602-bb43-78f6e8b51b70","Type":"ContainerDied","Data":"2f8f5a396c879473800e0a26812018c0651ce11b282271ee9bad74bb14b6f747"} Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.716883 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zw8vk" event={"ID":"b19adb35-c4b0-4602-bb43-78f6e8b51b70","Type":"ContainerDied","Data":"70646bf0be00d4827288d1767e686373f5be91d20ff1f158f20cf715c5460fba"} Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.716914 4725 scope.go:117] "RemoveContainer" containerID="2f8f5a396c879473800e0a26812018c0651ce11b282271ee9bad74bb14b6f747" Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.758285 4725 scope.go:117] "RemoveContainer" containerID="9b56e90a3cd390f6d7c4ad431157ac0200a91ccbea36a6edd7721d4139974dcf" Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.765099 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zw8vk"] Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.791749 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zw8vk"] Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.801201 4725 scope.go:117] "RemoveContainer" containerID="362be9e3f35e2fb137b055aee99e399aa2b27e5f117a9c7943cdb1d994f0e906" Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.824925 4725 scope.go:117] "RemoveContainer" containerID="2f8f5a396c879473800e0a26812018c0651ce11b282271ee9bad74bb14b6f747" Jan 20 11:32:12 crc kubenswrapper[4725]: E0120 11:32:12.825594 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2f8f5a396c879473800e0a26812018c0651ce11b282271ee9bad74bb14b6f747\": container with ID starting with 2f8f5a396c879473800e0a26812018c0651ce11b282271ee9bad74bb14b6f747 not found: ID does not exist" containerID="2f8f5a396c879473800e0a26812018c0651ce11b282271ee9bad74bb14b6f747" Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.825663 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f8f5a396c879473800e0a26812018c0651ce11b282271ee9bad74bb14b6f747"} err="failed to get container status \"2f8f5a396c879473800e0a26812018c0651ce11b282271ee9bad74bb14b6f747\": rpc error: code = NotFound desc = could not find container \"2f8f5a396c879473800e0a26812018c0651ce11b282271ee9bad74bb14b6f747\": container with ID starting with 2f8f5a396c879473800e0a26812018c0651ce11b282271ee9bad74bb14b6f747 not found: ID does not exist" Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.825690 4725 scope.go:117] "RemoveContainer" containerID="9b56e90a3cd390f6d7c4ad431157ac0200a91ccbea36a6edd7721d4139974dcf" Jan 20 11:32:12 crc kubenswrapper[4725]: E0120 11:32:12.826146 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b56e90a3cd390f6d7c4ad431157ac0200a91ccbea36a6edd7721d4139974dcf\": container with ID starting with 9b56e90a3cd390f6d7c4ad431157ac0200a91ccbea36a6edd7721d4139974dcf not found: ID does not exist" containerID="9b56e90a3cd390f6d7c4ad431157ac0200a91ccbea36a6edd7721d4139974dcf" Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.826181 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b56e90a3cd390f6d7c4ad431157ac0200a91ccbea36a6edd7721d4139974dcf"} err="failed to get container status \"9b56e90a3cd390f6d7c4ad431157ac0200a91ccbea36a6edd7721d4139974dcf\": rpc error: code = NotFound desc = could not find container \"9b56e90a3cd390f6d7c4ad431157ac0200a91ccbea36a6edd7721d4139974dcf\": container with ID 
starting with 9b56e90a3cd390f6d7c4ad431157ac0200a91ccbea36a6edd7721d4139974dcf not found: ID does not exist" Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.826200 4725 scope.go:117] "RemoveContainer" containerID="362be9e3f35e2fb137b055aee99e399aa2b27e5f117a9c7943cdb1d994f0e906" Jan 20 11:32:12 crc kubenswrapper[4725]: E0120 11:32:12.826760 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"362be9e3f35e2fb137b055aee99e399aa2b27e5f117a9c7943cdb1d994f0e906\": container with ID starting with 362be9e3f35e2fb137b055aee99e399aa2b27e5f117a9c7943cdb1d994f0e906 not found: ID does not exist" containerID="362be9e3f35e2fb137b055aee99e399aa2b27e5f117a9c7943cdb1d994f0e906" Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.826844 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"362be9e3f35e2fb137b055aee99e399aa2b27e5f117a9c7943cdb1d994f0e906"} err="failed to get container status \"362be9e3f35e2fb137b055aee99e399aa2b27e5f117a9c7943cdb1d994f0e906\": rpc error: code = NotFound desc = could not find container \"362be9e3f35e2fb137b055aee99e399aa2b27e5f117a9c7943cdb1d994f0e906\": container with ID starting with 362be9e3f35e2fb137b055aee99e399aa2b27e5f117a9c7943cdb1d994f0e906 not found: ID does not exist" Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.944606 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b19adb35-c4b0-4602-bb43-78f6e8b51b70" path="/var/lib/kubelet/pods/b19adb35-c4b0-4602-bb43-78f6e8b51b70/volumes" Jan 20 11:32:16 crc kubenswrapper[4725]: I0120 11:32:16.755315 4725 generic.go:334] "Generic (PLEG): container finished" podID="184194a7-f32c-4db2-a055-5a776484cda8" containerID="4f404a7a9bd4e81eb4ce25e9968cd444dc303fa9ed15549c4f192754e01659a9" exitCode=0 Jan 20 11:32:16 crc kubenswrapper[4725]: I0120 11:32:16.755377 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"184194a7-f32c-4db2-a055-5a776484cda8","Type":"ContainerDied","Data":"4f404a7a9bd4e81eb4ce25e9968cd444dc303fa9ed15549c4f192754e01659a9"} Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.101596 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.211423 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/184194a7-f32c-4db2-a055-5a776484cda8-buildcachedir\") pod \"184194a7-f32c-4db2-a055-5a776484cda8\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.211526 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-buildworkdir\") pod \"184194a7-f32c-4db2-a055-5a776484cda8\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.211569 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/184194a7-f32c-4db2-a055-5a776484cda8-node-pullsecrets\") pod \"184194a7-f32c-4db2-a055-5a776484cda8\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.211639 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-container-storage-root\") pod \"184194a7-f32c-4db2-a055-5a776484cda8\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.211693 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-build-blob-cache\") pod \"184194a7-f32c-4db2-a055-5a776484cda8\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.211715 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdbxk\" (UniqueName: \"kubernetes.io/projected/184194a7-f32c-4db2-a055-5a776484cda8-kube-api-access-fdbxk\") pod \"184194a7-f32c-4db2-a055-5a776484cda8\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.211708 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/184194a7-f32c-4db2-a055-5a776484cda8-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "184194a7-f32c-4db2-a055-5a776484cda8" (UID: "184194a7-f32c-4db2-a055-5a776484cda8"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.211740 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-container-storage-run\") pod \"184194a7-f32c-4db2-a055-5a776484cda8\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.211776 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"184194a7-f32c-4db2-a055-5a776484cda8\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.211751 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/184194a7-f32c-4db2-a055-5a776484cda8-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "184194a7-f32c-4db2-a055-5a776484cda8" (UID: "184194a7-f32c-4db2-a055-5a776484cda8"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.211823 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-builder-dockercfg-ns4k2-pull\") pod \"184194a7-f32c-4db2-a055-5a776484cda8\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.211856 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-system-configs\") pod \"184194a7-f32c-4db2-a055-5a776484cda8\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.211878 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-proxy-ca-bundles\") pod \"184194a7-f32c-4db2-a055-5a776484cda8\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.211938 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-builder-dockercfg-ns4k2-push\") pod \"184194a7-f32c-4db2-a055-5a776484cda8\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.211963 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-ca-bundles\") pod \"184194a7-f32c-4db2-a055-5a776484cda8\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.212311 4725 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/184194a7-f32c-4db2-a055-5a776484cda8-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.212324 4725 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/184194a7-f32c-4db2-a055-5a776484cda8-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.213113 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "184194a7-f32c-4db2-a055-5a776484cda8" (UID: "184194a7-f32c-4db2-a055-5a776484cda8"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.213267 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "184194a7-f32c-4db2-a055-5a776484cda8" (UID: "184194a7-f32c-4db2-a055-5a776484cda8"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.213627 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "184194a7-f32c-4db2-a055-5a776484cda8" (UID: "184194a7-f32c-4db2-a055-5a776484cda8"). 
InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.213655 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "184194a7-f32c-4db2-a055-5a776484cda8" (UID: "184194a7-f32c-4db2-a055-5a776484cda8"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.215230 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "184194a7-f32c-4db2-a055-5a776484cda8" (UID: "184194a7-f32c-4db2-a055-5a776484cda8"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.221587 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-builder-dockercfg-ns4k2-push" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-push") pod "184194a7-f32c-4db2-a055-5a776484cda8" (UID: "184194a7-f32c-4db2-a055-5a776484cda8"). InnerVolumeSpecName "builder-dockercfg-ns4k2-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.221682 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-builder-dockercfg-ns4k2-pull" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-pull") pod "184194a7-f32c-4db2-a055-5a776484cda8" (UID: "184194a7-f32c-4db2-a055-5a776484cda8"). InnerVolumeSpecName "builder-dockercfg-ns4k2-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.221929 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/184194a7-f32c-4db2-a055-5a776484cda8-kube-api-access-fdbxk" (OuterVolumeSpecName: "kube-api-access-fdbxk") pod "184194a7-f32c-4db2-a055-5a776484cda8" (UID: "184194a7-f32c-4db2-a055-5a776484cda8"). InnerVolumeSpecName "kube-api-access-fdbxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.225307 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "184194a7-f32c-4db2-a055-5a776484cda8" (UID: "184194a7-f32c-4db2-a055-5a776484cda8"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.313593 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-builder-dockercfg-ns4k2-pull\") on node \"crc\" DevicePath \"\"" Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.313883 4725 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.313893 4725 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.313906 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-builder-dockercfg-ns4k2-push\") on node \"crc\" DevicePath \"\"" Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.313916 4725 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.313928 4725 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.313938 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdbxk\" (UniqueName: \"kubernetes.io/projected/184194a7-f32c-4db2-a055-5a776484cda8-kube-api-access-fdbxk\") on node 
\"crc\" DevicePath \"\"" Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.313946 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.313955 4725 reconciler_common.go:293] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.494405 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "184194a7-f32c-4db2-a055-5a776484cda8" (UID: "184194a7-f32c-4db2-a055-5a776484cda8"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.516213 4725 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.777208 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"184194a7-f32c-4db2-a055-5a776484cda8","Type":"ContainerDied","Data":"83f0df430461004a77dbc3f3c45e3d15c682f81a8ac4872c355830d1bd8280b0"} Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.777277 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83f0df430461004a77dbc3f3c45e3d15c682f81a8ac4872c355830d1bd8280b0" Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.777478 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 20 11:32:20 crc kubenswrapper[4725]: I0120 11:32:20.549467 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-tppzp"] Jan 20 11:32:20 crc kubenswrapper[4725]: E0120 11:32:20.550400 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="184194a7-f32c-4db2-a055-5a776484cda8" containerName="git-clone" Jan 20 11:32:20 crc kubenswrapper[4725]: I0120 11:32:20.550419 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="184194a7-f32c-4db2-a055-5a776484cda8" containerName="git-clone" Jan 20 11:32:20 crc kubenswrapper[4725]: E0120 11:32:20.550436 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b19adb35-c4b0-4602-bb43-78f6e8b51b70" containerName="extract-utilities" Jan 20 11:32:20 crc kubenswrapper[4725]: I0120 11:32:20.550443 4725 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b19adb35-c4b0-4602-bb43-78f6e8b51b70" containerName="extract-utilities" Jan 20 11:32:20 crc kubenswrapper[4725]: E0120 11:32:20.550454 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="184194a7-f32c-4db2-a055-5a776484cda8" containerName="docker-build" Jan 20 11:32:20 crc kubenswrapper[4725]: I0120 11:32:20.550461 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="184194a7-f32c-4db2-a055-5a776484cda8" containerName="docker-build" Jan 20 11:32:20 crc kubenswrapper[4725]: E0120 11:32:20.550473 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b19adb35-c4b0-4602-bb43-78f6e8b51b70" containerName="registry-server" Jan 20 11:32:20 crc kubenswrapper[4725]: I0120 11:32:20.550482 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b19adb35-c4b0-4602-bb43-78f6e8b51b70" containerName="registry-server" Jan 20 11:32:20 crc kubenswrapper[4725]: E0120 11:32:20.550492 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="184194a7-f32c-4db2-a055-5a776484cda8" containerName="manage-dockerfile" Jan 20 11:32:20 crc kubenswrapper[4725]: I0120 11:32:20.550499 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="184194a7-f32c-4db2-a055-5a776484cda8" containerName="manage-dockerfile" Jan 20 11:32:20 crc kubenswrapper[4725]: E0120 11:32:20.550508 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b19adb35-c4b0-4602-bb43-78f6e8b51b70" containerName="extract-content" Jan 20 11:32:20 crc kubenswrapper[4725]: I0120 11:32:20.550517 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b19adb35-c4b0-4602-bb43-78f6e8b51b70" containerName="extract-content" Jan 20 11:32:20 crc kubenswrapper[4725]: I0120 11:32:20.550688 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="b19adb35-c4b0-4602-bb43-78f6e8b51b70" containerName="registry-server" Jan 20 11:32:20 crc kubenswrapper[4725]: I0120 11:32:20.550701 4725 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="184194a7-f32c-4db2-a055-5a776484cda8" containerName="docker-build" Jan 20 11:32:20 crc kubenswrapper[4725]: I0120 11:32:20.551328 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-tppzp" Jan 20 11:32:20 crc kubenswrapper[4725]: I0120 11:32:20.554303 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"infrawatch-operators-dockercfg-6qtgx" Jan 20 11:32:20 crc kubenswrapper[4725]: I0120 11:32:20.567906 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-tppzp"] Jan 20 11:32:20 crc kubenswrapper[4725]: I0120 11:32:20.651564 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rq7f\" (UniqueName: \"kubernetes.io/projected/d34ba0e4-6450-40c0-b870-fa39d91f4340-kube-api-access-7rq7f\") pod \"infrawatch-operators-tppzp\" (UID: \"d34ba0e4-6450-40c0-b870-fa39d91f4340\") " pod="service-telemetry/infrawatch-operators-tppzp" Jan 20 11:32:20 crc kubenswrapper[4725]: I0120 11:32:20.752940 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rq7f\" (UniqueName: \"kubernetes.io/projected/d34ba0e4-6450-40c0-b870-fa39d91f4340-kube-api-access-7rq7f\") pod \"infrawatch-operators-tppzp\" (UID: \"d34ba0e4-6450-40c0-b870-fa39d91f4340\") " pod="service-telemetry/infrawatch-operators-tppzp" Jan 20 11:32:20 crc kubenswrapper[4725]: I0120 11:32:20.776387 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rq7f\" (UniqueName: \"kubernetes.io/projected/d34ba0e4-6450-40c0-b870-fa39d91f4340-kube-api-access-7rq7f\") pod \"infrawatch-operators-tppzp\" (UID: \"d34ba0e4-6450-40c0-b870-fa39d91f4340\") " pod="service-telemetry/infrawatch-operators-tppzp" Jan 20 11:32:20 crc kubenswrapper[4725]: I0120 11:32:20.907356 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-tppzp" Jan 20 11:32:21 crc kubenswrapper[4725]: I0120 11:32:21.190137 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-tppzp"] Jan 20 11:32:21 crc kubenswrapper[4725]: I0120 11:32:21.267784 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "184194a7-f32c-4db2-a055-5a776484cda8" (UID: "184194a7-f32c-4db2-a055-5a776484cda8"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:32:21 crc kubenswrapper[4725]: I0120 11:32:21.364202 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 20 11:32:21 crc kubenswrapper[4725]: I0120 11:32:21.817332 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-tppzp" event={"ID":"d34ba0e4-6450-40c0-b870-fa39d91f4340","Type":"ContainerStarted","Data":"ef1c9d46b2251485916f2411ceef68848442ee457d2afecc7f3db523f6fb286a"} Jan 20 11:32:23 crc kubenswrapper[4725]: I0120 11:32:23.137614 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-tppzp"] Jan 20 11:32:23 crc kubenswrapper[4725]: I0120 11:32:23.350573 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-4fmg5"] Jan 20 11:32:23 crc kubenswrapper[4725]: I0120 11:32:23.351728 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-4fmg5" Jan 20 11:32:23 crc kubenswrapper[4725]: I0120 11:32:23.363559 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-4fmg5"] Jan 20 11:32:23 crc kubenswrapper[4725]: I0120 11:32:23.501596 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7c6r\" (UniqueName: \"kubernetes.io/projected/514d6114-a2ee-4a88-9798-9a27066ed03a-kube-api-access-q7c6r\") pod \"infrawatch-operators-4fmg5\" (UID: \"514d6114-a2ee-4a88-9798-9a27066ed03a\") " pod="service-telemetry/infrawatch-operators-4fmg5" Jan 20 11:32:23 crc kubenswrapper[4725]: I0120 11:32:23.603232 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7c6r\" (UniqueName: \"kubernetes.io/projected/514d6114-a2ee-4a88-9798-9a27066ed03a-kube-api-access-q7c6r\") pod \"infrawatch-operators-4fmg5\" (UID: \"514d6114-a2ee-4a88-9798-9a27066ed03a\") " pod="service-telemetry/infrawatch-operators-4fmg5" Jan 20 11:32:23 crc kubenswrapper[4725]: I0120 11:32:23.638213 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7c6r\" (UniqueName: \"kubernetes.io/projected/514d6114-a2ee-4a88-9798-9a27066ed03a-kube-api-access-q7c6r\") pod \"infrawatch-operators-4fmg5\" (UID: \"514d6114-a2ee-4a88-9798-9a27066ed03a\") " pod="service-telemetry/infrawatch-operators-4fmg5" Jan 20 11:32:23 crc kubenswrapper[4725]: I0120 11:32:23.694170 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-4fmg5" Jan 20 11:32:23 crc kubenswrapper[4725]: I0120 11:32:23.991779 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-4fmg5"] Jan 20 11:32:24 crc kubenswrapper[4725]: W0120 11:32:24.007570 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod514d6114_a2ee_4a88_9798_9a27066ed03a.slice/crio-fe715624a1b335afc1911d9561ba06ff2238276490377db3e400e95acdb17479 WatchSource:0}: Error finding container fe715624a1b335afc1911d9561ba06ff2238276490377db3e400e95acdb17479: Status 404 returned error can't find the container with id fe715624a1b335afc1911d9561ba06ff2238276490377db3e400e95acdb17479 Jan 20 11:32:24 crc kubenswrapper[4725]: I0120 11:32:24.849817 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-4fmg5" event={"ID":"514d6114-a2ee-4a88-9798-9a27066ed03a","Type":"ContainerStarted","Data":"fe715624a1b335afc1911d9561ba06ff2238276490377db3e400e95acdb17479"} Jan 20 11:32:26 crc kubenswrapper[4725]: I0120 11:32:26.727945 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:32:26 crc kubenswrapper[4725]: I0120 11:32:26.728052 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:32:26 crc kubenswrapper[4725]: I0120 11:32:26.728170 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:32:26 crc kubenswrapper[4725]: I0120 11:32:26.728992 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f"} pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 11:32:26 crc kubenswrapper[4725]: I0120 11:32:26.729065 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" containerID="cri-o://fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" gracePeriod=600 Jan 20 11:32:27 crc kubenswrapper[4725]: I0120 11:32:27.900771 4725 generic.go:334] "Generic (PLEG): container finished" podID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" exitCode=0 Jan 20 11:32:27 crc kubenswrapper[4725]: I0120 11:32:27.900845 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerDied","Data":"fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f"} Jan 20 11:32:27 crc kubenswrapper[4725]: I0120 11:32:27.901263 4725 scope.go:117] "RemoveContainer" containerID="f4fc1fcf338ebfe5af6d4787c5c22eee2304f68710ca0a31b34dc23628c963f3" Jan 20 11:32:28 crc kubenswrapper[4725]: E0120 11:32:28.108872 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:32:28 crc kubenswrapper[4725]: I0120 11:32:28.911040 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:32:28 crc kubenswrapper[4725]: E0120 11:32:28.911413 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:32:38 crc kubenswrapper[4725]: E0120 11:32:38.809855 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Jan 20 11:32:38 crc kubenswrapper[4725]: E0120 11:32:38.811341 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7rq7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-tppzp_service-telemetry(d34ba0e4-6450-40c0-b870-fa39d91f4340): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 11:32:38 crc kubenswrapper[4725]: E0120 11:32:38.812881 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"registry-server\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/infrawatch-operators-tppzp" podUID="d34ba0e4-6450-40c0-b870-fa39d91f4340" Jan 20 11:32:38 crc kubenswrapper[4725]: E0120 11:32:38.826662 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Jan 20 11:32:38 crc kubenswrapper[4725]: E0120 11:32:38.826884 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q7c6r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-4fmg5_service-telemetry(514d6114-a2ee-4a88-9798-9a27066ed03a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 11:32:38 crc kubenswrapper[4725]: E0120 11:32:38.828305 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/infrawatch-operators-4fmg5" podUID="514d6114-a2ee-4a88-9798-9a27066ed03a" Jan 20 11:32:38 crc kubenswrapper[4725]: E0120 11:32:38.992333 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" 
pod="service-telemetry/infrawatch-operators-4fmg5" podUID="514d6114-a2ee-4a88-9798-9a27066ed03a" Jan 20 11:32:39 crc kubenswrapper[4725]: I0120 11:32:39.315272 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-tppzp" Jan 20 11:32:39 crc kubenswrapper[4725]: I0120 11:32:39.389135 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rq7f\" (UniqueName: \"kubernetes.io/projected/d34ba0e4-6450-40c0-b870-fa39d91f4340-kube-api-access-7rq7f\") pod \"d34ba0e4-6450-40c0-b870-fa39d91f4340\" (UID: \"d34ba0e4-6450-40c0-b870-fa39d91f4340\") " Jan 20 11:32:39 crc kubenswrapper[4725]: I0120 11:32:39.395736 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d34ba0e4-6450-40c0-b870-fa39d91f4340-kube-api-access-7rq7f" (OuterVolumeSpecName: "kube-api-access-7rq7f") pod "d34ba0e4-6450-40c0-b870-fa39d91f4340" (UID: "d34ba0e4-6450-40c0-b870-fa39d91f4340"). InnerVolumeSpecName "kube-api-access-7rq7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:32:39 crc kubenswrapper[4725]: I0120 11:32:39.490972 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rq7f\" (UniqueName: \"kubernetes.io/projected/d34ba0e4-6450-40c0-b870-fa39d91f4340-kube-api-access-7rq7f\") on node \"crc\" DevicePath \"\"" Jan 20 11:32:39 crc kubenswrapper[4725]: I0120 11:32:39.997959 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-tppzp" event={"ID":"d34ba0e4-6450-40c0-b870-fa39d91f4340","Type":"ContainerDied","Data":"ef1c9d46b2251485916f2411ceef68848442ee457d2afecc7f3db523f6fb286a"} Jan 20 11:32:39 crc kubenswrapper[4725]: I0120 11:32:39.997986 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-tppzp"
Jan 20 11:32:40 crc kubenswrapper[4725]: I0120 11:32:40.053999 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-tppzp"]
Jan 20 11:32:40 crc kubenswrapper[4725]: I0120 11:32:40.067745 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-tppzp"]
Jan 20 11:32:40 crc kubenswrapper[4725]: I0120 11:32:40.943653 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d34ba0e4-6450-40c0-b870-fa39d91f4340" path="/var/lib/kubelet/pods/d34ba0e4-6450-40c0-b870-fa39d91f4340/volumes"
Jan 20 11:32:42 crc kubenswrapper[4725]: I0120 11:32:42.936815 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f"
Jan 20 11:32:42 crc kubenswrapper[4725]: E0120 11:32:42.938172 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 11:32:51 crc kubenswrapper[4725]: I0120 11:32:51.093371 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-4fmg5" event={"ID":"514d6114-a2ee-4a88-9798-9a27066ed03a","Type":"ContainerStarted","Data":"e9f4f503b82d1497799639260d7a78206c2b6d7e71cc786895f674c1e78eecfc"}
Jan 20 11:32:51 crc kubenswrapper[4725]: I0120 11:32:51.120675 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-4fmg5" podStartSLOduration=1.485649316 podStartE2EDuration="28.120652041s" podCreationTimestamp="2026-01-20 11:32:23 +0000 UTC" firstStartedPulling="2026-01-20 11:32:24.013431428 +0000 UTC m=+1672.221753411" lastFinishedPulling="2026-01-20 11:32:50.648434163 +0000 UTC m=+1698.856756136" observedRunningTime="2026-01-20 11:32:51.115833689 +0000 UTC m=+1699.324155662" watchObservedRunningTime="2026-01-20 11:32:51.120652041 +0000 UTC m=+1699.328974014"
Jan 20 11:32:53 crc kubenswrapper[4725]: I0120 11:32:53.695696 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-4fmg5"
Jan 20 11:32:53 crc kubenswrapper[4725]: I0120 11:32:53.696289 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-4fmg5"
Jan 20 11:32:53 crc kubenswrapper[4725]: I0120 11:32:53.842030 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-4fmg5"
Jan 20 11:32:53 crc kubenswrapper[4725]: I0120 11:32:53.934214 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f"
Jan 20 11:32:53 crc kubenswrapper[4725]: E0120 11:32:53.934515 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 11:33:03 crc kubenswrapper[4725]: I0120 11:33:03.731339 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-4fmg5"
Jan 20 11:33:05 crc kubenswrapper[4725]: I0120 11:33:05.932590 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f"
Jan 20 11:33:05 crc kubenswrapper[4725]: E0120 11:33:05.932993 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 11:33:09 crc kubenswrapper[4725]: I0120 11:33:09.229660 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4"]
Jan 20 11:33:09 crc kubenswrapper[4725]: I0120 11:33:09.234720 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4"
Jan 20 11:33:09 crc kubenswrapper[4725]: I0120 11:33:09.248433 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4"]
Jan 20 11:33:09 crc kubenswrapper[4725]: I0120 11:33:09.257338 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6c49be43-a86b-4475-8bd3-a1105dd19ad1-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4\" (UID: \"6c49be43-a86b-4475-8bd3-a1105dd19ad1\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4"
Jan 20 11:33:09 crc kubenswrapper[4725]: I0120 11:33:09.257655 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz9ts\" (UniqueName: \"kubernetes.io/projected/6c49be43-a86b-4475-8bd3-a1105dd19ad1-kube-api-access-zz9ts\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4\" (UID: \"6c49be43-a86b-4475-8bd3-a1105dd19ad1\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4"
Jan 20 11:33:09 crc kubenswrapper[4725]: I0120 11:33:09.257715 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6c49be43-a86b-4475-8bd3-a1105dd19ad1-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4\" (UID: \"6c49be43-a86b-4475-8bd3-a1105dd19ad1\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4"
Jan 20 11:33:09 crc kubenswrapper[4725]: I0120 11:33:09.358979 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6c49be43-a86b-4475-8bd3-a1105dd19ad1-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4\" (UID: \"6c49be43-a86b-4475-8bd3-a1105dd19ad1\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4"
Jan 20 11:33:09 crc kubenswrapper[4725]: I0120 11:33:09.359046 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz9ts\" (UniqueName: \"kubernetes.io/projected/6c49be43-a86b-4475-8bd3-a1105dd19ad1-kube-api-access-zz9ts\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4\" (UID: \"6c49be43-a86b-4475-8bd3-a1105dd19ad1\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4"
Jan 20 11:33:09 crc kubenswrapper[4725]: I0120 11:33:09.359094 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6c49be43-a86b-4475-8bd3-a1105dd19ad1-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4\" (UID: \"6c49be43-a86b-4475-8bd3-a1105dd19ad1\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4"
Jan 20 11:33:09 crc kubenswrapper[4725]: I0120 11:33:09.359629 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6c49be43-a86b-4475-8bd3-a1105dd19ad1-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4\" (UID: \"6c49be43-a86b-4475-8bd3-a1105dd19ad1\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4"
Jan 20 11:33:09 crc kubenswrapper[4725]: I0120 11:33:09.359741 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6c49be43-a86b-4475-8bd3-a1105dd19ad1-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4\" (UID: \"6c49be43-a86b-4475-8bd3-a1105dd19ad1\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4"
Jan 20 11:33:09 crc kubenswrapper[4725]: I0120 11:33:09.379724 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz9ts\" (UniqueName: \"kubernetes.io/projected/6c49be43-a86b-4475-8bd3-a1105dd19ad1-kube-api-access-zz9ts\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4\" (UID: \"6c49be43-a86b-4475-8bd3-a1105dd19ad1\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4"
Jan 20 11:33:09 crc kubenswrapper[4725]: I0120 11:33:09.566898 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4"
Jan 20 11:33:09 crc kubenswrapper[4725]: I0120 11:33:09.817422 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4"]
Jan 20 11:33:09 crc kubenswrapper[4725]: W0120 11:33:09.826477 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c49be43_a86b_4475_8bd3_a1105dd19ad1.slice/crio-3a159b0bed9e673f78d7daa7f265c5e7cfe4c8f310c11725483636be3ba159ad WatchSource:0}: Error finding container 3a159b0bed9e673f78d7daa7f265c5e7cfe4c8f310c11725483636be3ba159ad: Status 404 returned error can't find the container with id 3a159b0bed9e673f78d7daa7f265c5e7cfe4c8f310c11725483636be3ba159ad
Jan 20 11:33:10 crc kubenswrapper[4725]: I0120 11:33:10.036471 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75"]
Jan 20 11:33:10 crc kubenswrapper[4725]: I0120 11:33:10.038900 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75"
Jan 20 11:33:10 crc kubenswrapper[4725]: I0120 11:33:10.061791 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75"]
Jan 20 11:33:10 crc kubenswrapper[4725]: I0120 11:33:10.175320 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8splz\" (UniqueName: \"kubernetes.io/projected/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-kube-api-access-8splz\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75\" (UID: \"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75"
Jan 20 11:33:10 crc kubenswrapper[4725]: I0120 11:33:10.175433 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75\" (UID: \"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75"
Jan 20 11:33:10 crc kubenswrapper[4725]: I0120 11:33:10.175761 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75\" (UID: \"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75"
Jan 20 11:33:10 crc kubenswrapper[4725]: I0120 11:33:10.245526 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4" event={"ID":"6c49be43-a86b-4475-8bd3-a1105dd19ad1","Type":"ContainerStarted","Data":"3a159b0bed9e673f78d7daa7f265c5e7cfe4c8f310c11725483636be3ba159ad"}
Jan 20 11:33:10 crc kubenswrapper[4725]: I0120 11:33:10.277050 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75\" (UID: \"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75"
Jan 20 11:33:10 crc kubenswrapper[4725]: I0120 11:33:10.277212 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75\" (UID: \"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75"
Jan 20 11:33:10 crc kubenswrapper[4725]: I0120 11:33:10.277265 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8splz\" (UniqueName: \"kubernetes.io/projected/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-kube-api-access-8splz\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75\" (UID: \"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75"
Jan 20 11:33:10 crc kubenswrapper[4725]: I0120 11:33:10.277989 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75\" (UID: \"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75"
Jan 20 11:33:10 crc kubenswrapper[4725]: I0120 11:33:10.278027 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75\" (UID: \"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75"
Jan 20 11:33:10 crc kubenswrapper[4725]: I0120 11:33:10.300269 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8splz\" (UniqueName: \"kubernetes.io/projected/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-kube-api-access-8splz\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75\" (UID: \"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75"
Jan 20 11:33:10 crc kubenswrapper[4725]: I0120 11:33:10.355114 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75"
Jan 20 11:33:10 crc kubenswrapper[4725]: I0120 11:33:10.818417 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75"]
Jan 20 11:33:10 crc kubenswrapper[4725]: W0120 11:33:10.830915 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34d9f6e3_822c_4b9e_a9f1_4f5fa7a8ce83.slice/crio-cab26bfe90f1bf0e6620e6cf3d789ce9e628ecbac58f28acdd489bed3c3e843d WatchSource:0}: Error finding container cab26bfe90f1bf0e6620e6cf3d789ce9e628ecbac58f28acdd489bed3c3e843d: Status 404 returned error can't find the container with id cab26bfe90f1bf0e6620e6cf3d789ce9e628ecbac58f28acdd489bed3c3e843d
Jan 20 11:33:11 crc kubenswrapper[4725]: I0120 11:33:11.255299 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75" event={"ID":"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83","Type":"ContainerStarted","Data":"cab26bfe90f1bf0e6620e6cf3d789ce9e628ecbac58f28acdd489bed3c3e843d"}
Jan 20 11:33:14 crc kubenswrapper[4725]: I0120 11:33:14.297511 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75" event={"ID":"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83","Type":"ContainerStarted","Data":"03cefba0f36e88b3436a6505be4355c483f681b8f10929f9dd65ac558dced7f7"}
Jan 20 11:33:14 crc kubenswrapper[4725]: I0120 11:33:14.300654 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4" event={"ID":"6c49be43-a86b-4475-8bd3-a1105dd19ad1","Type":"ContainerStarted","Data":"44fd2d6a4962c66c239a0537bbedf0e1ea0e729472ffa414c4837765f7b23dda"}
Jan 20 11:33:15 crc kubenswrapper[4725]: I0120 11:33:15.313649 4725 generic.go:334] "Generic (PLEG): container finished" podID="6c49be43-a86b-4475-8bd3-a1105dd19ad1" containerID="44fd2d6a4962c66c239a0537bbedf0e1ea0e729472ffa414c4837765f7b23dda" exitCode=0
Jan 20 11:33:15 crc kubenswrapper[4725]: I0120 11:33:15.314034 4725 generic.go:334] "Generic (PLEG): container finished" podID="6c49be43-a86b-4475-8bd3-a1105dd19ad1" containerID="5322f861c1a71f5da86bab990e805725991caaa0a88d6b181fd2a9c80b08ef00" exitCode=0
Jan 20 11:33:15 crc kubenswrapper[4725]: I0120 11:33:15.313810 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4" event={"ID":"6c49be43-a86b-4475-8bd3-a1105dd19ad1","Type":"ContainerDied","Data":"44fd2d6a4962c66c239a0537bbedf0e1ea0e729472ffa414c4837765f7b23dda"}
Jan 20 11:33:15 crc kubenswrapper[4725]: I0120 11:33:15.314117 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4" event={"ID":"6c49be43-a86b-4475-8bd3-a1105dd19ad1","Type":"ContainerDied","Data":"5322f861c1a71f5da86bab990e805725991caaa0a88d6b181fd2a9c80b08ef00"}
Jan 20 11:33:15 crc kubenswrapper[4725]: I0120 11:33:15.317663 4725 generic.go:334] "Generic (PLEG): container finished" podID="34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83" containerID="03cefba0f36e88b3436a6505be4355c483f681b8f10929f9dd65ac558dced7f7" exitCode=0
Jan 20 11:33:15 crc kubenswrapper[4725]: I0120 11:33:15.317754 4725 generic.go:334] "Generic (PLEG): container finished" podID="34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83" containerID="a7be2a6ad50c3f3a2562db87b8b10abe4e0c90fa599df8cdfadb6f48b6848f33" exitCode=0
Jan 20 11:33:15 crc kubenswrapper[4725]: I0120 11:33:15.317721 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75" event={"ID":"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83","Type":"ContainerDied","Data":"03cefba0f36e88b3436a6505be4355c483f681b8f10929f9dd65ac558dced7f7"}
Jan 20 11:33:15 crc kubenswrapper[4725]: I0120 11:33:15.317841 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75" event={"ID":"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83","Type":"ContainerDied","Data":"a7be2a6ad50c3f3a2562db87b8b10abe4e0c90fa599df8cdfadb6f48b6848f33"}
Jan 20 11:33:16 crc kubenswrapper[4725]: I0120 11:33:16.331127 4725 generic.go:334] "Generic (PLEG): container finished" podID="34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83" containerID="e23dca9b6f1fe94344a1bca068cb46f94d215c5d2bdc4f4696f3bd64a221d6d7" exitCode=0
Jan 20 11:33:16 crc kubenswrapper[4725]: I0120 11:33:16.331296 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75" event={"ID":"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83","Type":"ContainerDied","Data":"e23dca9b6f1fe94344a1bca068cb46f94d215c5d2bdc4f4696f3bd64a221d6d7"}
Jan 20 11:33:16 crc kubenswrapper[4725]: I0120 11:33:16.336811 4725 generic.go:334] "Generic (PLEG): container finished" podID="6c49be43-a86b-4475-8bd3-a1105dd19ad1" containerID="ef2296c14bddc126931440ba3bf049299e7f9ff33e4cd0358862a289b7825f7c" exitCode=0
Jan 20 11:33:16 crc kubenswrapper[4725]: I0120 11:33:16.336877 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4" event={"ID":"6c49be43-a86b-4475-8bd3-a1105dd19ad1","Type":"ContainerDied","Data":"ef2296c14bddc126931440ba3bf049299e7f9ff33e4cd0358862a289b7825f7c"}
Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.613011 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4"
Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.623942 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75"
Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.640691 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8splz\" (UniqueName: \"kubernetes.io/projected/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-kube-api-access-8splz\") pod \"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83\" (UID: \"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83\") "
Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.640755 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz9ts\" (UniqueName: \"kubernetes.io/projected/6c49be43-a86b-4475-8bd3-a1105dd19ad1-kube-api-access-zz9ts\") pod \"6c49be43-a86b-4475-8bd3-a1105dd19ad1\" (UID: \"6c49be43-a86b-4475-8bd3-a1105dd19ad1\") "
Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.641932 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-bundle" (OuterVolumeSpecName: "bundle") pod "34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83" (UID: "34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.640837 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-bundle\") pod \"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83\" (UID: \"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83\") "
Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.642800 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6c49be43-a86b-4475-8bd3-a1105dd19ad1-bundle\") pod \"6c49be43-a86b-4475-8bd3-a1105dd19ad1\" (UID: \"6c49be43-a86b-4475-8bd3-a1105dd19ad1\") "
Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.642870 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6c49be43-a86b-4475-8bd3-a1105dd19ad1-util\") pod \"6c49be43-a86b-4475-8bd3-a1105dd19ad1\" (UID: \"6c49be43-a86b-4475-8bd3-a1105dd19ad1\") "
Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.642942 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-util\") pod \"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83\" (UID: \"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83\") "
Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.643990 4725 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.644135 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c49be43-a86b-4475-8bd3-a1105dd19ad1-bundle" (OuterVolumeSpecName: "bundle") pod "6c49be43-a86b-4475-8bd3-a1105dd19ad1" (UID: "6c49be43-a86b-4475-8bd3-a1105dd19ad1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.648183 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-kube-api-access-8splz" (OuterVolumeSpecName: "kube-api-access-8splz") pod "34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83" (UID: "34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83"). InnerVolumeSpecName "kube-api-access-8splz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.648233 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c49be43-a86b-4475-8bd3-a1105dd19ad1-kube-api-access-zz9ts" (OuterVolumeSpecName: "kube-api-access-zz9ts") pod "6c49be43-a86b-4475-8bd3-a1105dd19ad1" (UID: "6c49be43-a86b-4475-8bd3-a1105dd19ad1"). InnerVolumeSpecName "kube-api-access-zz9ts". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.666767 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-util" (OuterVolumeSpecName: "util") pod "34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83" (UID: "34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.668913 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c49be43-a86b-4475-8bd3-a1105dd19ad1-util" (OuterVolumeSpecName: "util") pod "6c49be43-a86b-4475-8bd3-a1105dd19ad1" (UID: "6c49be43-a86b-4475-8bd3-a1105dd19ad1"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.746510 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8splz\" (UniqueName: \"kubernetes.io/projected/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-kube-api-access-8splz\") on node \"crc\" DevicePath \"\""
Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.746556 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz9ts\" (UniqueName: \"kubernetes.io/projected/6c49be43-a86b-4475-8bd3-a1105dd19ad1-kube-api-access-zz9ts\") on node \"crc\" DevicePath \"\""
Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.746581 4725 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6c49be43-a86b-4475-8bd3-a1105dd19ad1-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.746593 4725 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6c49be43-a86b-4475-8bd3-a1105dd19ad1-util\") on node \"crc\" DevicePath \"\""
Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.746606 4725 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-util\") on node \"crc\" DevicePath \"\""
Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.933018 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f"
Jan 20 11:33:17 crc kubenswrapper[4725]: E0120 11:33:17.933341 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 11:33:18 crc kubenswrapper[4725]: I0120 11:33:18.368363 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4" event={"ID":"6c49be43-a86b-4475-8bd3-a1105dd19ad1","Type":"ContainerDied","Data":"3a159b0bed9e673f78d7daa7f265c5e7cfe4c8f310c11725483636be3ba159ad"}
Jan 20 11:33:18 crc kubenswrapper[4725]: I0120 11:33:18.368431 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a159b0bed9e673f78d7daa7f265c5e7cfe4c8f310c11725483636be3ba159ad"
Jan 20 11:33:18 crc kubenswrapper[4725]: I0120 11:33:18.368607 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4"
Jan 20 11:33:18 crc kubenswrapper[4725]: I0120 11:33:18.372096 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75" event={"ID":"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83","Type":"ContainerDied","Data":"cab26bfe90f1bf0e6620e6cf3d789ce9e628ecbac58f28acdd489bed3c3e843d"}
Jan 20 11:33:18 crc kubenswrapper[4725]: I0120 11:33:18.372163 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cab26bfe90f1bf0e6620e6cf3d789ce9e628ecbac58f28acdd489bed3c3e843d"
Jan 20 11:33:18 crc kubenswrapper[4725]: I0120 11:33:18.372319 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75"
Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.402847 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-86d4f8cb59-xtrqk"]
Jan 20 11:33:21 crc kubenswrapper[4725]: E0120 11:33:21.403652 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c49be43-a86b-4475-8bd3-a1105dd19ad1" containerName="util"
Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.403669 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c49be43-a86b-4475-8bd3-a1105dd19ad1" containerName="util"
Jan 20 11:33:21 crc kubenswrapper[4725]: E0120 11:33:21.403686 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83" containerName="util"
Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.403692 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83" containerName="util"
Jan 20 11:33:21 crc kubenswrapper[4725]: E0120 11:33:21.403708 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83" containerName="pull"
Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.403714 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83" containerName="pull"
Jan 20 11:33:21 crc kubenswrapper[4725]: E0120 11:33:21.403727 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83" containerName="extract"
Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.403734 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83" containerName="extract"
Jan 20 11:33:21 crc kubenswrapper[4725]: E0120 11:33:21.403749 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c49be43-a86b-4475-8bd3-a1105dd19ad1" containerName="extract"
Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.403756 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c49be43-a86b-4475-8bd3-a1105dd19ad1" containerName="extract"
Jan 20 11:33:21 crc kubenswrapper[4725]: E0120 11:33:21.403767 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c49be43-a86b-4475-8bd3-a1105dd19ad1" containerName="pull"
Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.403776 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c49be43-a86b-4475-8bd3-a1105dd19ad1" containerName="pull"
Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.403895 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83" containerName="extract"
Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.403919 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c49be43-a86b-4475-8bd3-a1105dd19ad1" containerName="extract"
Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.404478 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-86d4f8cb59-xtrqk"
Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.409428 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-operator-dockercfg-btv9g"
Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.420998 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-86d4f8cb59-xtrqk"]
Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.507686 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhxdj\" (UniqueName: \"kubernetes.io/projected/288c5de6-7288-478c-b790-1f348c4827f4-kube-api-access-jhxdj\") pod \"smart-gateway-operator-86d4f8cb59-xtrqk\" (UID: \"288c5de6-7288-478c-b790-1f348c4827f4\") " pod="service-telemetry/smart-gateway-operator-86d4f8cb59-xtrqk"
Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.507802 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/288c5de6-7288-478c-b790-1f348c4827f4-runner\") pod \"smart-gateway-operator-86d4f8cb59-xtrqk\" (UID: \"288c5de6-7288-478c-b790-1f348c4827f4\") " pod="service-telemetry/smart-gateway-operator-86d4f8cb59-xtrqk"
Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.608867 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/288c5de6-7288-478c-b790-1f348c4827f4-runner\") pod \"smart-gateway-operator-86d4f8cb59-xtrqk\" (UID: \"288c5de6-7288-478c-b790-1f348c4827f4\") " pod="service-telemetry/smart-gateway-operator-86d4f8cb59-xtrqk"
Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.608965 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhxdj\" (UniqueName: \"kubernetes.io/projected/288c5de6-7288-478c-b790-1f348c4827f4-kube-api-access-jhxdj\") pod \"smart-gateway-operator-86d4f8cb59-xtrqk\" (UID: \"288c5de6-7288-478c-b790-1f348c4827f4\") " pod="service-telemetry/smart-gateway-operator-86d4f8cb59-xtrqk"
Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.609554 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/288c5de6-7288-478c-b790-1f348c4827f4-runner\") pod \"smart-gateway-operator-86d4f8cb59-xtrqk\" (UID: \"288c5de6-7288-478c-b790-1f348c4827f4\") " pod="service-telemetry/smart-gateway-operator-86d4f8cb59-xtrqk"
Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.631999 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhxdj\" (UniqueName: \"kubernetes.io/projected/288c5de6-7288-478c-b790-1f348c4827f4-kube-api-access-jhxdj\") pod \"smart-gateway-operator-86d4f8cb59-xtrqk\" (UID: \"288c5de6-7288-478c-b790-1f348c4827f4\") " pod="service-telemetry/smart-gateway-operator-86d4f8cb59-xtrqk"
Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.725598 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-86d4f8cb59-xtrqk"
Jan 20 11:33:22 crc kubenswrapper[4725]: I0120 11:33:22.015377 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-86d4f8cb59-xtrqk"]
Jan 20 11:33:22 crc kubenswrapper[4725]: I0120 11:33:22.028809 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 20 11:33:22 crc kubenswrapper[4725]: I0120 11:33:22.406590 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-86d4f8cb59-xtrqk" event={"ID":"288c5de6-7288-478c-b790-1f348c4827f4","Type":"ContainerStarted","Data":"559de67ea891095e76457a3bd24bcea7059b730dcd106995695931a130a8cb47"}
Jan 20 11:33:24 crc kubenswrapper[4725]: I0120 11:33:24.456223 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-9d4584887-5t9dx"]
Jan 20 11:33:24 crc kubenswrapper[4725]: I0120 11:33:24.457910 4725 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="service-telemetry/service-telemetry-operator-9d4584887-5t9dx" Jan 20 11:33:24 crc kubenswrapper[4725]: I0120 11:33:24.462693 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-operator-dockercfg-trjzb" Jan 20 11:33:24 crc kubenswrapper[4725]: I0120 11:33:24.479828 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-9d4584887-5t9dx"] Jan 20 11:33:24 crc kubenswrapper[4725]: I0120 11:33:24.562799 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zf6k\" (UniqueName: \"kubernetes.io/projected/653691a1-9088-47bd-97e2-4d2f17f885bf-kube-api-access-4zf6k\") pod \"service-telemetry-operator-9d4584887-5t9dx\" (UID: \"653691a1-9088-47bd-97e2-4d2f17f885bf\") " pod="service-telemetry/service-telemetry-operator-9d4584887-5t9dx" Jan 20 11:33:24 crc kubenswrapper[4725]: I0120 11:33:24.563184 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/653691a1-9088-47bd-97e2-4d2f17f885bf-runner\") pod \"service-telemetry-operator-9d4584887-5t9dx\" (UID: \"653691a1-9088-47bd-97e2-4d2f17f885bf\") " pod="service-telemetry/service-telemetry-operator-9d4584887-5t9dx" Jan 20 11:33:24 crc kubenswrapper[4725]: I0120 11:33:24.664434 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zf6k\" (UniqueName: \"kubernetes.io/projected/653691a1-9088-47bd-97e2-4d2f17f885bf-kube-api-access-4zf6k\") pod \"service-telemetry-operator-9d4584887-5t9dx\" (UID: \"653691a1-9088-47bd-97e2-4d2f17f885bf\") " pod="service-telemetry/service-telemetry-operator-9d4584887-5t9dx" Jan 20 11:33:24 crc kubenswrapper[4725]: I0120 11:33:24.664599 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: 
\"kubernetes.io/empty-dir/653691a1-9088-47bd-97e2-4d2f17f885bf-runner\") pod \"service-telemetry-operator-9d4584887-5t9dx\" (UID: \"653691a1-9088-47bd-97e2-4d2f17f885bf\") " pod="service-telemetry/service-telemetry-operator-9d4584887-5t9dx" Jan 20 11:33:24 crc kubenswrapper[4725]: I0120 11:33:24.665346 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/653691a1-9088-47bd-97e2-4d2f17f885bf-runner\") pod \"service-telemetry-operator-9d4584887-5t9dx\" (UID: \"653691a1-9088-47bd-97e2-4d2f17f885bf\") " pod="service-telemetry/service-telemetry-operator-9d4584887-5t9dx" Jan 20 11:33:24 crc kubenswrapper[4725]: I0120 11:33:24.690258 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zf6k\" (UniqueName: \"kubernetes.io/projected/653691a1-9088-47bd-97e2-4d2f17f885bf-kube-api-access-4zf6k\") pod \"service-telemetry-operator-9d4584887-5t9dx\" (UID: \"653691a1-9088-47bd-97e2-4d2f17f885bf\") " pod="service-telemetry/service-telemetry-operator-9d4584887-5t9dx" Jan 20 11:33:24 crc kubenswrapper[4725]: I0120 11:33:24.788245 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-9d4584887-5t9dx" Jan 20 11:33:25 crc kubenswrapper[4725]: I0120 11:33:25.108961 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-9d4584887-5t9dx"] Jan 20 11:33:25 crc kubenswrapper[4725]: I0120 11:33:25.458171 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-9d4584887-5t9dx" event={"ID":"653691a1-9088-47bd-97e2-4d2f17f885bf","Type":"ContainerStarted","Data":"8bd33e42645799c3eb6694bb46c468ee8d85e8dea1f736fd1ef922b58597829e"} Jan 20 11:33:29 crc kubenswrapper[4725]: I0120 11:33:29.932701 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:33:29 crc kubenswrapper[4725]: E0120 11:33:29.933559 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:33:41 crc kubenswrapper[4725]: E0120 11:33:41.085664 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/infrawatch/smart-gateway-operator:latest" Jan 20 11:33:41 crc kubenswrapper[4725]: E0120 11:33:41.086818 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/infrawatch/smart-gateway-operator:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:smart-gateway-operator,ValueFrom:nil,},EnvVar{Name:ANSIBLE_GATHERING,Value:explicit,ValueFrom:nil,},EnvVar{Name:ANSIBLE_VERBOSITY_SMARTGATEWAY_SMARTGATEWAY_INFRA_WATCH,Value:4,ValueFrom:nil,},EnvVar{Name:ANSIBLE_DEBUG_LOGS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CORE_SMARTGATEWAY_IMAGE,Value:image-registry.openshift-image-registry.svc:5000/service-telemetry/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BRIDGE_SMARTGATEWAY_IMAGE,Value:image-registry.openshift-image-registry.svc:5000/service-telemetry/sg-bridge:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OAUTH_PROXY_IMAGE,Value:quay.io/openshift/origin-oauth-proxy:latest,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:smart-gateway-operator.v5.0.1768908623,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:runner,ReadOnly:false,MountPath:/tmp/ansible-operator/runner,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhxdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop
:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod smart-gateway-operator-86d4f8cb59-xtrqk_service-telemetry(288c5de6-7288-478c-b790-1f348c4827f4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 11:33:41 crc kubenswrapper[4725]: E0120 11:33:41.087941 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/smart-gateway-operator-86d4f8cb59-xtrqk" podUID="288c5de6-7288-478c-b790-1f348c4827f4" Jan 20 11:33:41 crc kubenswrapper[4725]: E0120 11:33:41.698414 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/smart-gateway-operator:latest\\\"\"" pod="service-telemetry/smart-gateway-operator-86d4f8cb59-xtrqk" podUID="288c5de6-7288-478c-b790-1f348c4827f4" Jan 20 11:33:44 crc kubenswrapper[4725]: I0120 11:33:44.932253 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:33:44 crc kubenswrapper[4725]: E0120 11:33:44.933250 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:33:46 crc kubenswrapper[4725]: I0120 11:33:46.707833 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-9d4584887-5t9dx" event={"ID":"653691a1-9088-47bd-97e2-4d2f17f885bf","Type":"ContainerStarted","Data":"9791f25257498b3668ca277987870437a8b97d840ef7e3456f35603613b24107"} Jan 20 11:33:46 crc kubenswrapper[4725]: I0120 11:33:46.730903 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-9d4584887-5t9dx" podStartSLOduration=1.441243935 podStartE2EDuration="22.730880166s" podCreationTimestamp="2026-01-20 11:33:24 +0000 UTC" firstStartedPulling="2026-01-20 11:33:25.125807117 +0000 UTC m=+1733.334129090" lastFinishedPulling="2026-01-20 11:33:46.415443348 +0000 UTC m=+1754.623765321" observedRunningTime="2026-01-20 11:33:46.728653536 +0000 UTC m=+1754.936975509" watchObservedRunningTime="2026-01-20 11:33:46.730880166 +0000 UTC m=+1754.939202139" Jan 20 11:33:56 crc kubenswrapper[4725]: I0120 11:33:56.933017 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:33:56 crc kubenswrapper[4725]: E0120 11:33:56.933952 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:33:58 crc kubenswrapper[4725]: I0120 11:33:58.818624 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-86d4f8cb59-xtrqk" 
event={"ID":"288c5de6-7288-478c-b790-1f348c4827f4","Type":"ContainerStarted","Data":"da4e3cfe0898b44b38d7c038c7438c0d8100cace2c19611d1d7173c81f86732c"} Jan 20 11:33:58 crc kubenswrapper[4725]: I0120 11:33:58.847985 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-86d4f8cb59-xtrqk" podStartSLOduration=1.60321368 podStartE2EDuration="37.847954593s" podCreationTimestamp="2026-01-20 11:33:21 +0000 UTC" firstStartedPulling="2026-01-20 11:33:22.028509383 +0000 UTC m=+1730.236831356" lastFinishedPulling="2026-01-20 11:33:58.273250296 +0000 UTC m=+1766.481572269" observedRunningTime="2026-01-20 11:33:58.844566566 +0000 UTC m=+1767.052888539" watchObservedRunningTime="2026-01-20 11:33:58.847954593 +0000 UTC m=+1767.056276566" Jan 20 11:34:08 crc kubenswrapper[4725]: I0120 11:34:08.932378 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:34:08 crc kubenswrapper[4725]: E0120 11:34:08.933603 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:34:13 crc kubenswrapper[4725]: I0120 11:34:13.286888 4725 scope.go:117] "RemoveContainer" containerID="c7d0859d4065010f243a29a233ddba2921cdc8ab64a769f55e2ecc4ca1c5a41a" Jan 20 11:34:13 crc kubenswrapper[4725]: I0120 11:34:13.325380 4725 scope.go:117] "RemoveContainer" containerID="79cfb45a8c90dfaa65e6bb289f91b498d6f80d05aa29a1f6d45fa2050d0f30eb" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.620348 4725 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["service-telemetry/default-interconnect-68864d46cb-pg5vh"] Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.626993 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.632757 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.633063 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.634065 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.634343 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-w6m24" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.634497 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.634669 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.634946 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.660198 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-pg5vh"] Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.779233 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: 
\"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.779309 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.779339 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndc66\" (UniqueName: \"kubernetes.io/projected/a7ed1b92-041f-4075-bbc5-89e61158d803-kube-api-access-ndc66\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.779370 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.779390 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-inter-router-ca\") pod 
\"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.779421 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a7ed1b92-041f-4075-bbc5-89e61158d803-sasl-config\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.779459 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-sasl-users\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.880901 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-sasl-users\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.880981 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.881044 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.881099 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndc66\" (UniqueName: \"kubernetes.io/projected/a7ed1b92-041f-4075-bbc5-89e61158d803-kube-api-access-ndc66\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.881141 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.881180 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.881235 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a7ed1b92-041f-4075-bbc5-89e61158d803-sasl-config\") pod 
\"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.882418 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a7ed1b92-041f-4075-bbc5-89e61158d803-sasl-config\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.890289 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.890289 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.891204 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.893026 4725 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.893326 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-sasl-users\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.911140 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndc66\" (UniqueName: \"kubernetes.io/projected/a7ed1b92-041f-4075-bbc5-89e61158d803-kube-api-access-ndc66\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.958438 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:17 crc kubenswrapper[4725]: I0120 11:34:17.287116 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-pg5vh"] Jan 20 11:34:17 crc kubenswrapper[4725]: I0120 11:34:17.992859 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" event={"ID":"a7ed1b92-041f-4075-bbc5-89e61158d803","Type":"ContainerStarted","Data":"fc8242d5514e690ee80b2bdcc2ff5977848ca545548efc96d47954b1674d6f08"} Jan 20 11:34:19 crc kubenswrapper[4725]: I0120 11:34:19.933046 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:34:19 crc kubenswrapper[4725]: E0120 11:34:19.933643 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:34:26 crc kubenswrapper[4725]: I0120 11:34:26.111323 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" event={"ID":"a7ed1b92-041f-4075-bbc5-89e61158d803","Type":"ContainerStarted","Data":"c14bd4fa6ae8b5c49bbef4c942582542d141988e84936a51137bc3a20377b033"} Jan 20 11:34:26 crc kubenswrapper[4725]: I0120 11:34:26.138415 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" podStartSLOduration=2.453642429 podStartE2EDuration="10.138385808s" podCreationTimestamp="2026-01-20 11:34:16 +0000 UTC" firstStartedPulling="2026-01-20 11:34:17.31773833 +0000 UTC 
m=+1785.526060303" lastFinishedPulling="2026-01-20 11:34:25.002481709 +0000 UTC m=+1793.210803682" observedRunningTime="2026-01-20 11:34:26.134688562 +0000 UTC m=+1794.343010555" watchObservedRunningTime="2026-01-20 11:34:26.138385808 +0000 UTC m=+1794.346707791" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.502937 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.523964 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.527652 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-1" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.527977 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"serving-certs-ca-bundle" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.528013 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-stf-dockercfg-jjxsd" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.528210 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.528310 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-2" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.530147 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-session-secret" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.530335 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-web-config" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.530408 4725 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"service-telemetry"/"prometheus-default-rulefiles-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.530687 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-tls-assets-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.530821 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-prometheus-proxy-tls" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.543991 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.624400 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-web-config\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.624465 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d31d6ca-dd83-489d-9956-abb0947df80d-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.624526 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7d31d6ca-dd83-489d-9956-abb0947df80d-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.624552 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"pvc-ceb51b36-53ea-416a-89c9-3c7434a988e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ceb51b36-53ea-416a-89c9-3c7434a988e0\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.624573 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.624598 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.624618 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7d31d6ca-dd83-489d-9956-abb0947df80d-tls-assets\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.624649 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7b4f\" (UniqueName: \"kubernetes.io/projected/7d31d6ca-dd83-489d-9956-abb0947df80d-kube-api-access-c7b4f\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc 
kubenswrapper[4725]: I0120 11:34:30.624675 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7d31d6ca-dd83-489d-9956-abb0947df80d-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.624695 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7d31d6ca-dd83-489d-9956-abb0947df80d-config-out\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.624718 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7d31d6ca-dd83-489d-9956-abb0947df80d-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.624736 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-config\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.725243 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7d31d6ca-dd83-489d-9956-abb0947df80d-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " 
pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.725326 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ceb51b36-53ea-416a-89c9-3c7434a988e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ceb51b36-53ea-416a-89c9-3c7434a988e0\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.725354 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.725385 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.725417 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7d31d6ca-dd83-489d-9956-abb0947df80d-tls-assets\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.725465 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7b4f\" (UniqueName: \"kubernetes.io/projected/7d31d6ca-dd83-489d-9956-abb0947df80d-kube-api-access-c7b4f\") pod \"prometheus-default-0\" (UID: 
\"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.725502 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7d31d6ca-dd83-489d-9956-abb0947df80d-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.725527 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7d31d6ca-dd83-489d-9956-abb0947df80d-config-out\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.725557 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7d31d6ca-dd83-489d-9956-abb0947df80d-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.725587 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-config\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.725634 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-web-config\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " 
pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.725671 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d31d6ca-dd83-489d-9956-abb0947df80d-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.727691 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7d31d6ca-dd83-489d-9956-abb0947df80d-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: E0120 11:34:30.728330 4725 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.728358 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7d31d6ca-dd83-489d-9956-abb0947df80d-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: E0120 11:34:30.728411 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-secret-default-prometheus-proxy-tls podName:7d31d6ca-dd83-489d-9956-abb0947df80d nodeName:}" failed. No retries permitted until 2026-01-20 11:34:31.228387752 +0000 UTC m=+1799.436709745 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "7d31d6ca-dd83-489d-9956-abb0947df80d") : secret "default-prometheus-proxy-tls" not found Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.729203 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7d31d6ca-dd83-489d-9956-abb0947df80d-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.730148 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d31d6ca-dd83-489d-9956-abb0947df80d-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.736271 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.737448 4725 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.737565 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ceb51b36-53ea-416a-89c9-3c7434a988e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ceb51b36-53ea-416a-89c9-3c7434a988e0\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/908af6317c94b2e5474affd556a5be241a0c727008a51d32804b368dae340079/globalmount\"" pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.742095 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-config\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.743565 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7d31d6ca-dd83-489d-9956-abb0947df80d-tls-assets\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.746977 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7d31d6ca-dd83-489d-9956-abb0947df80d-config-out\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.748162 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7b4f\" (UniqueName: \"kubernetes.io/projected/7d31d6ca-dd83-489d-9956-abb0947df80d-kube-api-access-c7b4f\") pod \"prometheus-default-0\" (UID: 
\"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.753788 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-web-config\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.766037 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ceb51b36-53ea-416a-89c9-3c7434a988e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ceb51b36-53ea-416a-89c9-3c7434a988e0\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:31 crc kubenswrapper[4725]: I0120 11:34:31.236274 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:31 crc kubenswrapper[4725]: E0120 11:34:31.236555 4725 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Jan 20 11:34:31 crc kubenswrapper[4725]: E0120 11:34:31.237456 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-secret-default-prometheus-proxy-tls podName:7d31d6ca-dd83-489d-9956-abb0947df80d nodeName:}" failed. No retries permitted until 2026-01-20 11:34:32.23743111 +0000 UTC m=+1800.445753083 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "7d31d6ca-dd83-489d-9956-abb0947df80d") : secret "default-prometheus-proxy-tls" not found Jan 20 11:34:31 crc kubenswrapper[4725]: I0120 11:34:31.932414 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:34:31 crc kubenswrapper[4725]: E0120 11:34:31.932736 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:34:32 crc kubenswrapper[4725]: I0120 11:34:32.252959 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:32 crc kubenswrapper[4725]: I0120 11:34:32.261214 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:32 crc kubenswrapper[4725]: I0120 11:34:32.397436 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Jan 20 11:34:32 crc kubenswrapper[4725]: I0120 11:34:32.643175 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 20 11:34:33 crc kubenswrapper[4725]: I0120 11:34:33.173197 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7d31d6ca-dd83-489d-9956-abb0947df80d","Type":"ContainerStarted","Data":"d1405246f054d1947c821cc7c3d161838e6c13d415170f5b0b5bb932d8f89acc"} Jan 20 11:34:40 crc kubenswrapper[4725]: I0120 11:34:40.319725 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7d31d6ca-dd83-489d-9956-abb0947df80d","Type":"ContainerStarted","Data":"72678a1a44d5458fc7e50e2ea55e25f8b66682610319324db3747cf67d49708a"} Jan 20 11:34:42 crc kubenswrapper[4725]: I0120 11:34:42.941213 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:34:42 crc kubenswrapper[4725]: E0120 11:34:42.942162 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:34:44 crc kubenswrapper[4725]: I0120 11:34:44.805406 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-fxcvg"] Jan 20 11:34:44 crc kubenswrapper[4725]: I0120 11:34:44.806666 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-6856cfb745-fxcvg" Jan 20 11:34:44 crc kubenswrapper[4725]: I0120 11:34:44.817504 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-fxcvg"] Jan 20 11:34:44 crc kubenswrapper[4725]: I0120 11:34:44.917583 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkrmb\" (UniqueName: \"kubernetes.io/projected/c22fff0f-fa8e-40e0-a8dc-a138398b06e7-kube-api-access-fkrmb\") pod \"default-snmp-webhook-6856cfb745-fxcvg\" (UID: \"c22fff0f-fa8e-40e0-a8dc-a138398b06e7\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-fxcvg" Jan 20 11:34:45 crc kubenswrapper[4725]: I0120 11:34:45.019238 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkrmb\" (UniqueName: \"kubernetes.io/projected/c22fff0f-fa8e-40e0-a8dc-a138398b06e7-kube-api-access-fkrmb\") pod \"default-snmp-webhook-6856cfb745-fxcvg\" (UID: \"c22fff0f-fa8e-40e0-a8dc-a138398b06e7\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-fxcvg" Jan 20 11:34:45 crc kubenswrapper[4725]: I0120 11:34:45.044508 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkrmb\" (UniqueName: \"kubernetes.io/projected/c22fff0f-fa8e-40e0-a8dc-a138398b06e7-kube-api-access-fkrmb\") pod \"default-snmp-webhook-6856cfb745-fxcvg\" (UID: \"c22fff0f-fa8e-40e0-a8dc-a138398b06e7\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-fxcvg" Jan 20 11:34:45 crc kubenswrapper[4725]: I0120 11:34:45.127312 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-6856cfb745-fxcvg" Jan 20 11:34:45 crc kubenswrapper[4725]: I0120 11:34:45.367048 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-fxcvg"] Jan 20 11:34:46 crc kubenswrapper[4725]: I0120 11:34:46.374898 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6856cfb745-fxcvg" event={"ID":"c22fff0f-fa8e-40e0-a8dc-a138398b06e7","Type":"ContainerStarted","Data":"93a372b28c4f0c6d5a862baa1a11854381ab162740d51d354dc13d27dd09e1c2"} Jan 20 11:34:49 crc kubenswrapper[4725]: I0120 11:34:49.401792 4725 generic.go:334] "Generic (PLEG): container finished" podID="7d31d6ca-dd83-489d-9956-abb0947df80d" containerID="72678a1a44d5458fc7e50e2ea55e25f8b66682610319324db3747cf67d49708a" exitCode=0 Jan 20 11:34:49 crc kubenswrapper[4725]: I0120 11:34:49.402130 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7d31d6ca-dd83-489d-9956-abb0947df80d","Type":"ContainerDied","Data":"72678a1a44d5458fc7e50e2ea55e25f8b66682610319324db3747cf67d49708a"} Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.091996 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.101861 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.102406 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.106328 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-tls-assets-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.106431 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-alertmanager-proxy-tls" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.107248 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-generated" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.107450 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-cluster-tls-config" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.107607 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-web-config" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.107753 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-stf-dockercfg-49kjc" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.282849 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.282925 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-web-config\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.282961 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-config-volume\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.282983 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.283252 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f490a619-9c48-49a0-857b-904084871923-config-out\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.283322 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f490a619-9c48-49a0-857b-904084871923-tls-assets\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.283376 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-dtlxc\" (UniqueName: \"kubernetes.io/projected/f490a619-9c48-49a0-857b-904084871923-kube-api-access-dtlxc\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.283461 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-08b68e4d-5706-4305-9a23-32acf3b55ffb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08b68e4d-5706-4305-9a23-32acf3b55ffb\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.283488 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.391671 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-config-volume\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.392184 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.392243 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f490a619-9c48-49a0-857b-904084871923-config-out\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.392262 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f490a619-9c48-49a0-857b-904084871923-tls-assets\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.392292 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtlxc\" (UniqueName: \"kubernetes.io/projected/f490a619-9c48-49a0-857b-904084871923-kube-api-access-dtlxc\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.392334 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-08b68e4d-5706-4305-9a23-32acf3b55ffb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08b68e4d-5706-4305-9a23-32acf3b55ffb\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.392357 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.392412 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.393222 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-web-config\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: E0120 11:34:53.393396 4725 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 20 11:34:53 crc kubenswrapper[4725]: E0120 11:34:53.393519 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-secret-default-alertmanager-proxy-tls podName:f490a619-9c48-49a0-857b-904084871923 nodeName:}" failed. No retries permitted until 2026-01-20 11:34:53.89349349 +0000 UTC m=+1822.101815453 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "f490a619-9c48-49a0-857b-904084871923") : secret "default-alertmanager-proxy-tls" not found Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.400564 4725 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.400609 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-08b68e4d-5706-4305-9a23-32acf3b55ffb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08b68e4d-5706-4305-9a23-32acf3b55ffb\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/334262ccefad4140c333c19789367f9cb48a75b8cc6e1f6bc07181136c225adc/globalmount\"" pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.411427 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-web-config\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.412569 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-config-volume\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.412666 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.415632 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f490a619-9c48-49a0-857b-904084871923-config-out\") pod 
\"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.417693 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f490a619-9c48-49a0-857b-904084871923-tls-assets\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.417796 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtlxc\" (UniqueName: \"kubernetes.io/projected/f490a619-9c48-49a0-857b-904084871923-kube-api-access-dtlxc\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.418885 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.462768 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-08b68e4d-5706-4305-9a23-32acf3b55ffb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08b68e4d-5706-4305-9a23-32acf3b55ffb\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.908317 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-secret-default-alertmanager-proxy-tls\") pod 
\"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.928430 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:54 crc kubenswrapper[4725]: I0120 11:34:54.030058 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:56 crc kubenswrapper[4725]: I0120 11:34:56.340123 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 20 11:34:57 crc kubenswrapper[4725]: I0120 11:34:57.494054 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"f490a619-9c48-49a0-857b-904084871923","Type":"ContainerStarted","Data":"9a2b0027ff5aeb5779a8c1ad7b4f7cb9efaea130c14f3240351ce7cfa4b1f4b2"} Jan 20 11:34:57 crc kubenswrapper[4725]: I0120 11:34:57.933804 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:34:57 crc kubenswrapper[4725]: E0120 11:34:57.935689 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:34:58 crc kubenswrapper[4725]: I0120 11:34:58.762269 4725 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="service-telemetry/default-snmp-webhook-6856cfb745-fxcvg" event={"ID":"c22fff0f-fa8e-40e0-a8dc-a138398b06e7","Type":"ContainerStarted","Data":"5757bd7654b6e4c606e79a28de828d0b3a966a8ee3b3528d8bac9e6ae3d5dc9d"} Jan 20 11:34:58 crc kubenswrapper[4725]: I0120 11:34:58.800637 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6856cfb745-fxcvg" podStartSLOduration=3.45375073 podStartE2EDuration="14.800604129s" podCreationTimestamp="2026-01-20 11:34:44 +0000 UTC" firstStartedPulling="2026-01-20 11:34:45.379821688 +0000 UTC m=+1813.588143661" lastFinishedPulling="2026-01-20 11:34:56.726675087 +0000 UTC m=+1824.934997060" observedRunningTime="2026-01-20 11:34:58.793477184 +0000 UTC m=+1827.001799167" watchObservedRunningTime="2026-01-20 11:34:58.800604129 +0000 UTC m=+1827.008926102" Jan 20 11:35:09 crc kubenswrapper[4725]: I0120 11:35:09.926656 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"f490a619-9c48-49a0-857b-904084871923","Type":"ContainerStarted","Data":"df082aed2f55c214afd488ebe846d87cb0693d738700a6bbba98647e748c15de"} Jan 20 11:35:11 crc kubenswrapper[4725]: E0120 11:35:11.448353 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="quay.io/prometheus/prometheus:latest" Jan 20 11:35:11 crc kubenswrapper[4725]: E0120 11:35:11.448963 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus,Image:quay.io/prometheus/prometheus:latest,Command:[],Args:[--config.file=/etc/prometheus/config_out/prometheus.env.yaml --web.enable-lifecycle --web.route-prefix=/ --web.listen-address=127.0.0.1:9090 --storage.tsdb.retention.time=24h --storage.tsdb.path=/prometheus 
--web.config.file=/etc/prometheus/web_config/web-config.yaml],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-out,ReadOnly:true,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-assets,ReadOnly:true,MountPath:/etc/prometheus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-default-db,ReadOnly:false,MountPath:/prometheus,SubPath:prometheus-db,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:secret-default-prometheus-proxy-tls,ReadOnly:true,MountPath:/etc/prometheus/secrets/default-prometheus-proxy-tls,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:secret-default-session-secret,ReadOnly:true,MountPath:/etc/prometheus/secrets/default-session-secret,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:configmap-serving-certs-ca-bundle,ReadOnly:true,MountPath:/etc/prometheus/configmaps/serving-certs-ca-bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-default-rulefiles-0,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-default-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-default-rulefiles-1,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-default-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-default-rulefiles-2,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-default-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/prometheus/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,Recursive
ReadOnly:nil,},VolumeMount{Name:kube-api-access-c7b4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[sh -c if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/healthy; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/healthy; else exit 1; fi],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[sh -c if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[sh -c if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; 
fi],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:15,SuccessThreshold:1,FailureThreshold:60,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-default-0_service-telemetry(7d31d6ca-dd83-489d-9956-abb0947df80d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 11:35:11 crc kubenswrapper[4725]: I0120 11:35:11.932720 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:35:11 crc kubenswrapper[4725]: E0120 11:35:11.933292 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:35:13 crc kubenswrapper[4725]: I0120 11:35:13.396136 4725 scope.go:117] "RemoveContainer" containerID="cd00c1d1c97cef650c244c3d6f0815615e809a8aa5699e75689e183731a91ab5" Jan 20 11:35:13 crc kubenswrapper[4725]: I0120 11:35:13.421259 4725 scope.go:117] "RemoveContainer" containerID="c39d6de3e24d8f3a14c460d9395b3e4c5d0c7f4110899d7ced5dff416dd88a6f" Jan 20 11:35:17 crc kubenswrapper[4725]: I0120 11:35:17.851837 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g"] Jan 20 11:35:17 crc kubenswrapper[4725]: I0120 11:35:17.857859 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:17 crc kubenswrapper[4725]: I0120 11:35:17.861602 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-coll-meter-proxy-tls" Jan 20 11:35:17 crc kubenswrapper[4725]: I0120 11:35:17.861698 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-meter-sg-core-configmap" Jan 20 11:35:17 crc kubenswrapper[4725]: I0120 11:35:17.861735 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-session-secret" Jan 20 11:35:17 crc kubenswrapper[4725]: I0120 11:35:17.861794 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-dockercfg-wn46n" Jan 20 11:35:17 crc kubenswrapper[4725]: I0120 11:35:17.874654 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g"] Jan 20 11:35:18 crc kubenswrapper[4725]: I0120 11:35:18.052776 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:18 crc kubenswrapper[4725]: I0120 11:35:18.052901 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd65f\" (UniqueName: \"kubernetes.io/projected/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-kube-api-access-pd65f\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " 
pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:18 crc kubenswrapper[4725]: I0120 11:35:18.052977 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:18 crc kubenswrapper[4725]: I0120 11:35:18.053060 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:18 crc kubenswrapper[4725]: I0120 11:35:18.053316 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:18 crc kubenswrapper[4725]: I0120 11:35:18.154732 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " 
pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:18 crc kubenswrapper[4725]: I0120 11:35:18.154794 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:18 crc kubenswrapper[4725]: I0120 11:35:18.154835 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:18 crc kubenswrapper[4725]: E0120 11:35:18.154963 4725 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Jan 20 11:35:18 crc kubenswrapper[4725]: E0120 11:35:18.155113 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-default-cloud1-coll-meter-proxy-tls podName:10b6bc99-b2ce-4952-a481-bbabe3a3fc16 nodeName:}" failed. No retries permitted until 2026-01-20 11:35:18.655068148 +0000 UTC m=+1846.863390121 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" (UID: "10b6bc99-b2ce-4952-a481-bbabe3a3fc16") : secret "default-cloud1-coll-meter-proxy-tls" not found Jan 20 11:35:18 crc kubenswrapper[4725]: I0120 11:35:18.155458 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:18 crc kubenswrapper[4725]: I0120 11:35:18.155566 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:18 crc kubenswrapper[4725]: I0120 11:35:18.156366 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd65f\" (UniqueName: \"kubernetes.io/projected/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-kube-api-access-pd65f\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:18 crc kubenswrapper[4725]: I0120 11:35:18.156302 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-sg-core-config\") pod 
\"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:18 crc kubenswrapper[4725]: I0120 11:35:18.165032 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:18 crc kubenswrapper[4725]: I0120 11:35:18.183124 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd65f\" (UniqueName: \"kubernetes.io/projected/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-kube-api-access-pd65f\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:18 crc kubenswrapper[4725]: I0120 11:35:18.663182 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:18 crc kubenswrapper[4725]: E0120 11:35:18.663435 4725 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Jan 20 11:35:18 crc kubenswrapper[4725]: E0120 11:35:18.663567 4725 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-default-cloud1-coll-meter-proxy-tls podName:10b6bc99-b2ce-4952-a481-bbabe3a3fc16 nodeName:}" failed. No retries permitted until 2026-01-20 11:35:19.663533659 +0000 UTC m=+1847.871855632 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" (UID: "10b6bc99-b2ce-4952-a481-bbabe3a3fc16") : secret "default-cloud1-coll-meter-proxy-tls" not found Jan 20 11:35:19 crc kubenswrapper[4725]: I0120 11:35:19.706149 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:19 crc kubenswrapper[4725]: I0120 11:35:19.712071 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:19 crc kubenswrapper[4725]: I0120 11:35:19.975878 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:20 crc kubenswrapper[4725]: I0120 11:35:20.271750 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g"] Jan 20 11:35:20 crc kubenswrapper[4725]: I0120 11:35:20.403094 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" event={"ID":"10b6bc99-b2ce-4952-a481-bbabe3a3fc16","Type":"ContainerStarted","Data":"09480f9645230bbbc0e55c635977c21a5b4f0d489349232d74325109b2eef5ad"} Jan 20 11:35:20 crc kubenswrapper[4725]: I0120 11:35:20.410452 4725 generic.go:334] "Generic (PLEG): container finished" podID="f490a619-9c48-49a0-857b-904084871923" containerID="df082aed2f55c214afd488ebe846d87cb0693d738700a6bbba98647e748c15de" exitCode=0 Jan 20 11:35:20 crc kubenswrapper[4725]: I0120 11:35:20.410535 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"f490a619-9c48-49a0-857b-904084871923","Type":"ContainerDied","Data":"df082aed2f55c214afd488ebe846d87cb0693d738700a6bbba98647e748c15de"} Jan 20 11:35:20 crc kubenswrapper[4725]: I0120 11:35:20.586561 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p"] Jan 20 11:35:20 crc kubenswrapper[4725]: I0120 11:35:20.588409 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:20 crc kubenswrapper[4725]: I0120 11:35:20.616517 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p"] Jan 20 11:35:20 crc kubenswrapper[4725]: I0120 11:35:20.620201 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-meter-sg-core-configmap" Jan 20 11:35:20 crc kubenswrapper[4725]: I0120 11:35:20.620378 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-ceil-meter-proxy-tls" Jan 20 11:35:20 crc kubenswrapper[4725]: I0120 11:35:20.727660 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/6b74ea17-71c5-47e0-a15e-e963223f11f0-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: \"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:20 crc kubenswrapper[4725]: I0120 11:35:20.727744 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/6b74ea17-71c5-47e0-a15e-e963223f11f0-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: \"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:20 crc kubenswrapper[4725]: I0120 11:35:20.727791 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlhmr\" (UniqueName: \"kubernetes.io/projected/6b74ea17-71c5-47e0-a15e-e963223f11f0-kube-api-access-qlhmr\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: 
\"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:20 crc kubenswrapper[4725]: I0120 11:35:20.727905 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b74ea17-71c5-47e0-a15e-e963223f11f0-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: \"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:20 crc kubenswrapper[4725]: I0120 11:35:20.727980 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/6b74ea17-71c5-47e0-a15e-e963223f11f0-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: \"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:21 crc kubenswrapper[4725]: I0120 11:35:21.015994 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/6b74ea17-71c5-47e0-a15e-e963223f11f0-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: \"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:21 crc kubenswrapper[4725]: I0120 11:35:21.016089 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/6b74ea17-71c5-47e0-a15e-e963223f11f0-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: \"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 
20 11:35:21 crc kubenswrapper[4725]: I0120 11:35:21.016133 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlhmr\" (UniqueName: \"kubernetes.io/projected/6b74ea17-71c5-47e0-a15e-e963223f11f0-kube-api-access-qlhmr\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: \"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:21 crc kubenswrapper[4725]: I0120 11:35:21.016167 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b74ea17-71c5-47e0-a15e-e963223f11f0-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: \"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:21 crc kubenswrapper[4725]: I0120 11:35:21.016252 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/6b74ea17-71c5-47e0-a15e-e963223f11f0-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: \"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:21 crc kubenswrapper[4725]: I0120 11:35:21.017966 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/6b74ea17-71c5-47e0-a15e-e963223f11f0-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: \"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:21 crc kubenswrapper[4725]: I0120 11:35:21.020811 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" 
(UniqueName: \"kubernetes.io/configmap/6b74ea17-71c5-47e0-a15e-e963223f11f0-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: \"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:21 crc kubenswrapper[4725]: I0120 11:35:21.027362 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/6b74ea17-71c5-47e0-a15e-e963223f11f0-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: \"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:21 crc kubenswrapper[4725]: E0120 11:35:21.027961 4725 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 20 11:35:21 crc kubenswrapper[4725]: E0120 11:35:21.028018 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b74ea17-71c5-47e0-a15e-e963223f11f0-default-cloud1-ceil-meter-proxy-tls podName:6b74ea17-71c5-47e0-a15e-e963223f11f0 nodeName:}" failed. No retries permitted until 2026-01-20 11:35:21.528000034 +0000 UTC m=+1849.736322007 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/6b74ea17-71c5-47e0-a15e-e963223f11f0-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" (UID: "6b74ea17-71c5-47e0-a15e-e963223f11f0") : secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 20 11:35:21 crc kubenswrapper[4725]: I0120 11:35:21.045685 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlhmr\" (UniqueName: \"kubernetes.io/projected/6b74ea17-71c5-47e0-a15e-e963223f11f0-kube-api-access-qlhmr\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: \"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:21 crc kubenswrapper[4725]: I0120 11:35:21.627351 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b74ea17-71c5-47e0-a15e-e963223f11f0-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: \"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:21 crc kubenswrapper[4725]: E0120 11:35:21.627614 4725 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 20 11:35:21 crc kubenswrapper[4725]: E0120 11:35:21.628013 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b74ea17-71c5-47e0-a15e-e963223f11f0-default-cloud1-ceil-meter-proxy-tls podName:6b74ea17-71c5-47e0-a15e-e963223f11f0 nodeName:}" failed. No retries permitted until 2026-01-20 11:35:22.627980968 +0000 UTC m=+1850.836302941 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/6b74ea17-71c5-47e0-a15e-e963223f11f0-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" (UID: "6b74ea17-71c5-47e0-a15e-e963223f11f0") : secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 20 11:35:22 crc kubenswrapper[4725]: I0120 11:35:22.851186 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b74ea17-71c5-47e0-a15e-e963223f11f0-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: \"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:22 crc kubenswrapper[4725]: I0120 11:35:22.877812 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b74ea17-71c5-47e0-a15e-e963223f11f0-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: \"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:23 crc kubenswrapper[4725]: I0120 11:35:23.020569 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:23 crc kubenswrapper[4725]: I0120 11:35:23.582711 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p"] Jan 20 11:35:23 crc kubenswrapper[4725]: I0120 11:35:23.892191 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" event={"ID":"6b74ea17-71c5-47e0-a15e-e963223f11f0","Type":"ContainerStarted","Data":"c1807948eb9f0450bbf79d5b7abc0df4cb8deeb137fad6aa5f0f7f4580a1680d"} Jan 20 11:35:25 crc kubenswrapper[4725]: I0120 11:35:25.932185 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:35:25 crc kubenswrapper[4725]: E0120 11:35:25.932695 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:35:29 crc kubenswrapper[4725]: I0120 11:35:29.229194 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7"] Jan 20 11:35:29 crc kubenswrapper[4725]: I0120 11:35:29.231884 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" Jan 20 11:35:29 crc kubenswrapper[4725]: I0120 11:35:29.234388 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7"] Jan 20 11:35:29 crc kubenswrapper[4725]: I0120 11:35:29.235749 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-sens-meter-sg-core-configmap" Jan 20 11:35:29 crc kubenswrapper[4725]: I0120 11:35:29.238302 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-sens-meter-proxy-tls" Jan 20 11:35:30 crc kubenswrapper[4725]: I0120 11:35:30.025853 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/14922311-0e93-4bf9-8980-72baefd93497-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" Jan 20 11:35:30 crc kubenswrapper[4725]: I0120 11:35:30.025915 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qsvd\" (UniqueName: \"kubernetes.io/projected/14922311-0e93-4bf9-8980-72baefd93497-kube-api-access-6qsvd\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" Jan 20 11:35:30 crc kubenswrapper[4725]: I0120 11:35:30.025985 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/14922311-0e93-4bf9-8980-72baefd93497-default-cloud1-sens-meter-proxy-tls\") pod 
\"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" Jan 20 11:35:30 crc kubenswrapper[4725]: I0120 11:35:30.026034 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/14922311-0e93-4bf9-8980-72baefd93497-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" Jan 20 11:35:30 crc kubenswrapper[4725]: I0120 11:35:30.026070 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/14922311-0e93-4bf9-8980-72baefd93497-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" Jan 20 11:35:30 crc kubenswrapper[4725]: I0120 11:35:30.128113 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/14922311-0e93-4bf9-8980-72baefd93497-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" Jan 20 11:35:30 crc kubenswrapper[4725]: I0120 11:35:30.128210 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/14922311-0e93-4bf9-8980-72baefd93497-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" Jan 20 11:35:30 crc kubenswrapper[4725]: I0120 11:35:30.128284 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/14922311-0e93-4bf9-8980-72baefd93497-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" Jan 20 11:35:30 crc kubenswrapper[4725]: E0120 11:35:30.128423 4725 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Jan 20 11:35:30 crc kubenswrapper[4725]: E0120 11:35:30.128583 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14922311-0e93-4bf9-8980-72baefd93497-default-cloud1-sens-meter-proxy-tls podName:14922311-0e93-4bf9-8980-72baefd93497 nodeName:}" failed. No retries permitted until 2026-01-20 11:35:30.628548881 +0000 UTC m=+1858.836871024 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/14922311-0e93-4bf9-8980-72baefd93497-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" (UID: "14922311-0e93-4bf9-8980-72baefd93497") : secret "default-cloud1-sens-meter-proxy-tls" not found Jan 20 11:35:30 crc kubenswrapper[4725]: I0120 11:35:30.129457 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/14922311-0e93-4bf9-8980-72baefd93497-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" Jan 20 11:35:30 crc kubenswrapper[4725]: I0120 11:35:30.129615 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/14922311-0e93-4bf9-8980-72baefd93497-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" Jan 20 11:35:30 crc kubenswrapper[4725]: I0120 11:35:30.129669 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qsvd\" (UniqueName: \"kubernetes.io/projected/14922311-0e93-4bf9-8980-72baefd93497-kube-api-access-6qsvd\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" Jan 20 11:35:30 crc kubenswrapper[4725]: I0120 11:35:30.130220 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/14922311-0e93-4bf9-8980-72baefd93497-socket-dir\") pod 
\"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" Jan 20 11:35:30 crc kubenswrapper[4725]: I0120 11:35:30.138069 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/14922311-0e93-4bf9-8980-72baefd93497-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" Jan 20 11:35:30 crc kubenswrapper[4725]: I0120 11:35:30.156809 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qsvd\" (UniqueName: \"kubernetes.io/projected/14922311-0e93-4bf9-8980-72baefd93497-kube-api-access-6qsvd\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" Jan 20 11:35:30 crc kubenswrapper[4725]: I0120 11:35:30.185955 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7d31d6ca-dd83-489d-9956-abb0947df80d","Type":"ContainerStarted","Data":"a32622a14d74d1fef0b9d8644fbb1668a79a67a9b61cc46eccad64e34247dc3d"} Jan 20 11:35:30 crc kubenswrapper[4725]: I0120 11:35:30.638544 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/14922311-0e93-4bf9-8980-72baefd93497-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" Jan 20 11:35:30 crc kubenswrapper[4725]: E0120 11:35:30.638822 4725 secret.go:188] Couldn't get secret 
service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Jan 20 11:35:30 crc kubenswrapper[4725]: E0120 11:35:30.638895 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14922311-0e93-4bf9-8980-72baefd93497-default-cloud1-sens-meter-proxy-tls podName:14922311-0e93-4bf9-8980-72baefd93497 nodeName:}" failed. No retries permitted until 2026-01-20 11:35:31.638875588 +0000 UTC m=+1859.847197561 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/14922311-0e93-4bf9-8980-72baefd93497-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" (UID: "14922311-0e93-4bf9-8980-72baefd93497") : secret "default-cloud1-sens-meter-proxy-tls" not found Jan 20 11:35:32 crc kubenswrapper[4725]: I0120 11:35:32.249231 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/14922311-0e93-4bf9-8980-72baefd93497-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" Jan 20 11:35:32 crc kubenswrapper[4725]: I0120 11:35:32.259109 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/14922311-0e93-4bf9-8980-72baefd93497-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" Jan 20 11:35:32 crc kubenswrapper[4725]: I0120 11:35:32.278606 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.497961 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm"] Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.500046 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm" Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.504927 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-event-sg-core-configmap" Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.505194 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-cert" Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.510939 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm"] Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.608940 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/739b7c2c-b11b-4260-a184-7dd184677dad-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-ff457bf89-458zm\" (UID: \"739b7c2c-b11b-4260-a184-7dd184677dad\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm" Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.609704 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/739b7c2c-b11b-4260-a184-7dd184677dad-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-ff457bf89-458zm\" (UID: \"739b7c2c-b11b-4260-a184-7dd184677dad\") " 
pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm" Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.609787 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/739b7c2c-b11b-4260-a184-7dd184677dad-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-ff457bf89-458zm\" (UID: \"739b7c2c-b11b-4260-a184-7dd184677dad\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm" Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.609821 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpd67\" (UniqueName: \"kubernetes.io/projected/739b7c2c-b11b-4260-a184-7dd184677dad-kube-api-access-tpd67\") pod \"default-cloud1-coll-event-smartgateway-ff457bf89-458zm\" (UID: \"739b7c2c-b11b-4260-a184-7dd184677dad\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm" Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.711970 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/739b7c2c-b11b-4260-a184-7dd184677dad-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-ff457bf89-458zm\" (UID: \"739b7c2c-b11b-4260-a184-7dd184677dad\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm" Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.712069 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpd67\" (UniqueName: \"kubernetes.io/projected/739b7c2c-b11b-4260-a184-7dd184677dad-kube-api-access-tpd67\") pod \"default-cloud1-coll-event-smartgateway-ff457bf89-458zm\" (UID: \"739b7c2c-b11b-4260-a184-7dd184677dad\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm" Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.712154 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/739b7c2c-b11b-4260-a184-7dd184677dad-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-ff457bf89-458zm\" (UID: \"739b7c2c-b11b-4260-a184-7dd184677dad\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm" Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.712192 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/739b7c2c-b11b-4260-a184-7dd184677dad-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-ff457bf89-458zm\" (UID: \"739b7c2c-b11b-4260-a184-7dd184677dad\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm" Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.712976 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/739b7c2c-b11b-4260-a184-7dd184677dad-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-ff457bf89-458zm\" (UID: \"739b7c2c-b11b-4260-a184-7dd184677dad\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm" Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.713510 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/739b7c2c-b11b-4260-a184-7dd184677dad-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-ff457bf89-458zm\" (UID: \"739b7c2c-b11b-4260-a184-7dd184677dad\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm" Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.722860 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/739b7c2c-b11b-4260-a184-7dd184677dad-elastic-certs\") pod 
\"default-cloud1-coll-event-smartgateway-ff457bf89-458zm\" (UID: \"739b7c2c-b11b-4260-a184-7dd184677dad\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm" Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.734955 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpd67\" (UniqueName: \"kubernetes.io/projected/739b7c2c-b11b-4260-a184-7dd184677dad-kube-api-access-tpd67\") pod \"default-cloud1-coll-event-smartgateway-ff457bf89-458zm\" (UID: \"739b7c2c-b11b-4260-a184-7dd184677dad\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm" Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.827588 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm" Jan 20 11:35:39 crc kubenswrapper[4725]: I0120 11:35:39.270860 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7"] Jan 20 11:35:40 crc kubenswrapper[4725]: W0120 11:35:40.690667 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14922311_0e93_4bf9_8980_72baefd93497.slice/crio-76b997e55ce106f2e175cf6c578f704d2627263282a478ea6107f55b0ec5bd36 WatchSource:0}: Error finding container 76b997e55ce106f2e175cf6c578f704d2627263282a478ea6107f55b0ec5bd36: Status 404 returned error can't find the container with id 76b997e55ce106f2e175cf6c578f704d2627263282a478ea6107f55b0ec5bd36 Jan 20 11:35:40 crc kubenswrapper[4725]: I0120 11:35:40.934563 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:35:40 crc kubenswrapper[4725]: E0120 11:35:40.934874 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.024294 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q"] Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.025716 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q" Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.034673 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-event-sg-core-configmap" Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.036461 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt9h8\" (UniqueName: \"kubernetes.io/projected/f84a2726-80cb-4393-84ca-d901b4ee446c-kube-api-access-qt9h8\") pod \"default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q\" (UID: \"f84a2726-80cb-4393-84ca-d901b4ee446c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q" Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.036559 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/f84a2726-80cb-4393-84ca-d901b4ee446c-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q\" (UID: \"f84a2726-80cb-4393-84ca-d901b4ee446c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q" Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.036652 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"elastic-certs\" (UniqueName: \"kubernetes.io/secret/f84a2726-80cb-4393-84ca-d901b4ee446c-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q\" (UID: \"f84a2726-80cb-4393-84ca-d901b4ee446c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q" Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.036805 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/f84a2726-80cb-4393-84ca-d901b4ee446c-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q\" (UID: \"f84a2726-80cb-4393-84ca-d901b4ee446c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q" Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.046029 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q"] Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.359590 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/f84a2726-80cb-4393-84ca-d901b4ee446c-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q\" (UID: \"f84a2726-80cb-4393-84ca-d901b4ee446c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q" Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.359726 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt9h8\" (UniqueName: \"kubernetes.io/projected/f84a2726-80cb-4393-84ca-d901b4ee446c-kube-api-access-qt9h8\") pod \"default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q\" (UID: \"f84a2726-80cb-4393-84ca-d901b4ee446c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q" Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.359767 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/f84a2726-80cb-4393-84ca-d901b4ee446c-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q\" (UID: \"f84a2726-80cb-4393-84ca-d901b4ee446c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q"
Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.359791 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/f84a2726-80cb-4393-84ca-d901b4ee446c-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q\" (UID: \"f84a2726-80cb-4393-84ca-d901b4ee446c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q"
Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.360717 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/f84a2726-80cb-4393-84ca-d901b4ee446c-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q\" (UID: \"f84a2726-80cb-4393-84ca-d901b4ee446c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q"
Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.360945 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/f84a2726-80cb-4393-84ca-d901b4ee446c-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q\" (UID: \"f84a2726-80cb-4393-84ca-d901b4ee446c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q"
Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.371613 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/f84a2726-80cb-4393-84ca-d901b4ee446c-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q\" (UID: \"f84a2726-80cb-4393-84ca-d901b4ee446c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q"
Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.378392 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" event={"ID":"14922311-0e93-4bf9-8980-72baefd93497","Type":"ContainerStarted","Data":"76b997e55ce106f2e175cf6c578f704d2627263282a478ea6107f55b0ec5bd36"}
Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.411554 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt9h8\" (UniqueName: \"kubernetes.io/projected/f84a2726-80cb-4393-84ca-d901b4ee446c-kube-api-access-qt9h8\") pod \"default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q\" (UID: \"f84a2726-80cb-4393-84ca-d901b4ee446c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q"
Jan 20 11:35:42 crc kubenswrapper[4725]: I0120 11:35:42.096550 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q"
Jan 20 11:35:42 crc kubenswrapper[4725]: E0120 11:35:42.269376 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="quay.io/prometheus/alertmanager:latest"
Jan 20 11:35:42 crc kubenswrapper[4725]: E0120 11:35:42.270208 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:alertmanager,Image:quay.io/prometheus/alertmanager:latest,Command:[],Args:[--config.file=/etc/alertmanager/config_out/alertmanager.env.yaml --storage.path=/alertmanager --data.retention=120h --cluster.listen-address= --web.listen-address=127.0.0.1:9093 --web.route-prefix=/ --cluster.label=service-telemetry/default --cluster.peer=alertmanager-default-0.alertmanager-operated:9094 --cluster.reconnect-timeout=5m --web.config.file=/etc/alertmanager/web_config/web-config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:mesh-tcp,HostPort:0,ContainerPort:9094,Protocol:TCP,HostIP:,},ContainerPort{Name:mesh-udp,HostPort:0,ContainerPort:9094,Protocol:UDP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{memory: {{209715200 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:false,MountPath:/etc/alertmanager/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:true,MountPath:/etc/alertmanager/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-assets,ReadOnly:true,MountPath:/etc/alertmanager/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:alertmanager-default-db,ReadOnly:false,MountPath:/alertmanager,SubPath:alertmanager-db,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:secret-default-alertmanager-proxy-tls,ReadOnly:true,MountPath:/etc/alertmanager/secrets/default-alertmanager-proxy-tls,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:secret-default-session-secret,ReadOnly:true,MountPath:/etc/alertmanager/secrets/default-session-secret,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/alertmanager/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cluster-tls-config,ReadOnly:true,MountPath:/etc/alertmanager/cluster_tls_config/cluster-tls-config.yaml,SubPath:cluster-tls-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dtlxc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod alertmanager-default-0_service-telemetry(f490a619-9c48-49a0-857b-904084871923): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 20 11:35:42 crc kubenswrapper[4725]: I0120 11:35:42.377348 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm"]
Jan 20 11:35:42 crc kubenswrapper[4725]: I0120 11:35:42.650489 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q"]
Jan 20 11:35:42 crc kubenswrapper[4725]: E0120 11:35:42.662467 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="service-telemetry/prometheus-default-0" podUID="7d31d6ca-dd83-489d-9956-abb0947df80d"
Jan 20 11:35:42 crc kubenswrapper[4725]: W0120 11:35:42.741736 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf84a2726_80cb_4393_84ca_d901b4ee446c.slice/crio-cc23a70788c717bfa78180b2795518af3610c3990a260a35b8363f1a8063cbd4 WatchSource:0}: Error finding container cc23a70788c717bfa78180b2795518af3610c3990a260a35b8363f1a8063cbd4: Status 404 returned error can't find the container with id cc23a70788c717bfa78180b2795518af3610c3990a260a35b8363f1a8063cbd4
Jan 20 11:35:43 crc kubenswrapper[4725]: I0120 11:35:43.404440 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm" event={"ID":"739b7c2c-b11b-4260-a184-7dd184677dad","Type":"ContainerStarted","Data":"06025181e7d1785f1eb470fbc77262ed1b338faab91737ca343db668e1da738f"}
Jan 20 11:35:43 crc kubenswrapper[4725]: I0120 11:35:43.406321 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" event={"ID":"6b74ea17-71c5-47e0-a15e-e963223f11f0","Type":"ContainerStarted","Data":"fd9ffc31121519069aab88569a92795991439a2dab1cfe307a62785a7775eed8"}
Jan 20 11:35:43 crc kubenswrapper[4725]: I0120 11:35:43.409710 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7d31d6ca-dd83-489d-9956-abb0947df80d","Type":"ContainerStarted","Data":"60897b0526705a2bdce96de8120b2996c6f51009d27d30aefb09adbcc70ac9e2"}
Jan 20 11:35:43 crc kubenswrapper[4725]: I0120 11:35:43.417966 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" event={"ID":"14922311-0e93-4bf9-8980-72baefd93497","Type":"ContainerStarted","Data":"0ed63706cd2277df0141d3ae50126099eb108aac690c3d09a51e5e52583aeace"}
Jan 20 11:35:43 crc kubenswrapper[4725]: I0120 11:35:43.421039 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" event={"ID":"10b6bc99-b2ce-4952-a481-bbabe3a3fc16","Type":"ContainerStarted","Data":"8cfe9fcda51a6cb8fa2fa3b7829b9ab0376307df990d61d5332c1a2f4369185c"}
Jan 20 11:35:43 crc kubenswrapper[4725]: I0120 11:35:43.422108 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q" event={"ID":"f84a2726-80cb-4393-84ca-d901b4ee446c","Type":"ContainerStarted","Data":"cc23a70788c717bfa78180b2795518af3610c3990a260a35b8363f1a8063cbd4"}
Jan 20 11:35:44 crc kubenswrapper[4725]: I0120 11:35:44.434003 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" event={"ID":"14922311-0e93-4bf9-8980-72baefd93497","Type":"ContainerStarted","Data":"0a94f1e7f7b86ee205276d80219c86dcc78f7edb37dbdbc85923fe9ded4fda65"}
Jan 20 11:35:45 crc kubenswrapper[4725]: I0120 11:35:45.459670 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm" event={"ID":"739b7c2c-b11b-4260-a184-7dd184677dad","Type":"ContainerStarted","Data":"a814cc228323d0e0e96ee552884a26d9b2bed4b56fadbfdeef8c6d236729c1b3"}
Jan 20 11:35:45 crc kubenswrapper[4725]: I0120 11:35:45.470193 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" event={"ID":"6b74ea17-71c5-47e0-a15e-e963223f11f0","Type":"ContainerStarted","Data":"85b9d5f0aedf2fae91704fa29f2787e20e867b09945f48a97ca1a50eab2049b5"}
Jan 20 11:35:45 crc kubenswrapper[4725]: I0120 11:35:45.475644 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7d31d6ca-dd83-489d-9956-abb0947df80d","Type":"ContainerStarted","Data":"08463e2706f1274310390e99a78738fc5eb6369194877cd2f67e4058ae8a432d"}
Jan 20 11:35:45 crc kubenswrapper[4725]: I0120 11:35:45.480044 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" event={"ID":"10b6bc99-b2ce-4952-a481-bbabe3a3fc16","Type":"ContainerStarted","Data":"213111dfdf0a0ff0f0b45369175eace3f63e8724046dbbebe7bbe9f19ce599f5"}
Jan 20 11:35:45 crc kubenswrapper[4725]: I0120 11:35:45.485492 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q" event={"ID":"f84a2726-80cb-4393-84ca-d901b4ee446c","Type":"ContainerStarted","Data":"bb62f6c31d67b824a936c21441a5c5ca97da949df73f348a3cfc81003a086165"}
Jan 20 11:35:45 crc kubenswrapper[4725]: I0120 11:35:45.527684 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=4.173438717 podStartE2EDuration="1m16.527650712s" podCreationTimestamp="2026-01-20 11:34:29 +0000 UTC" firstStartedPulling="2026-01-20 11:34:32.64854933 +0000 UTC m=+1800.856871303" lastFinishedPulling="2026-01-20 11:35:45.002761325 +0000 UTC m=+1873.211083298" observedRunningTime="2026-01-20 11:35:45.515789058 +0000 UTC m=+1873.724111021" watchObservedRunningTime="2026-01-20 11:35:45.527650712 +0000 UTC m=+1873.735972695"
Jan 20 11:35:47 crc kubenswrapper[4725]: I0120 11:35:47.398950 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0"
Jan 20 11:35:47 crc kubenswrapper[4725]: I0120 11:35:47.399578 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/prometheus-default-0"
Jan 20 11:35:47 crc kubenswrapper[4725]: I0120 11:35:47.480046 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0"
Jan 20 11:35:47 crc kubenswrapper[4725]: I0120 11:35:47.521939 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"f490a619-9c48-49a0-857b-904084871923","Type":"ContainerStarted","Data":"54b53c28ca160cc4b6173817b4e4bfe5c780f0e152b146502bf9e2df7e4447d2"}
Jan 20 11:35:47 crc kubenswrapper[4725]: I0120 11:35:47.592330 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0"
Jan 20 11:35:51 crc kubenswrapper[4725]: E0120 11:35:51.615722 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="service-telemetry/alertmanager-default-0" podUID="f490a619-9c48-49a0-857b-904084871923"
Jan 20 11:35:52 crc kubenswrapper[4725]: I0120 11:35:52.164887 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm" event={"ID":"739b7c2c-b11b-4260-a184-7dd184677dad","Type":"ContainerStarted","Data":"66329f3e90880289692694b45ebd6bf2e64cef907cc78afe5beb6936040098ab"}
Jan 20 11:35:52 crc kubenswrapper[4725]: I0120 11:35:52.172389 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" event={"ID":"6b74ea17-71c5-47e0-a15e-e963223f11f0","Type":"ContainerStarted","Data":"b14e74a778b54bb58919bce0d6c9488250e61d2b9e051f515581fc0d551630c6"}
Jan 20 11:35:52 crc kubenswrapper[4725]: I0120 11:35:52.191222 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" event={"ID":"14922311-0e93-4bf9-8980-72baefd93497","Type":"ContainerStarted","Data":"97d36175ed4533e74d822f61e733dcd1ca814923e08ac5ca15b90c1e2d54406f"}
Jan 20 11:35:52 crc kubenswrapper[4725]: I0120 11:35:52.195328 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" event={"ID":"10b6bc99-b2ce-4952-a481-bbabe3a3fc16","Type":"ContainerStarted","Data":"f578675e78ff6f4f683d9738a43c02568cab6e08b81345f7e1a8019fd6a79081"}
Jan 20 11:35:52 crc kubenswrapper[4725]: I0120 11:35:52.208625 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q" event={"ID":"f84a2726-80cb-4393-84ca-d901b4ee446c","Type":"ContainerStarted","Data":"89af6fb1acca926234ff8753cf7ae0cd7083606c275a08159c5bcfa057659025"}
Jan 20 11:35:52 crc kubenswrapper[4725]: I0120 11:35:52.213884 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"f490a619-9c48-49a0-857b-904084871923","Type":"ContainerStarted","Data":"b54981290931d4392f20f30488dfb1fb7da473a0e2b8274fcedb37ebfd1d216a"}
Jan 20 11:35:52 crc kubenswrapper[4725]: E0120 11:35:52.215883 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/prometheus/alertmanager:latest\\\"\"" pod="service-telemetry/alertmanager-default-0" podUID="f490a619-9c48-49a0-857b-904084871923"
Jan 20 11:35:52 crc kubenswrapper[4725]: I0120 11:35:52.217719 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm" podStartSLOduration=5.33374102 podStartE2EDuration="14.21767643s" podCreationTimestamp="2026-01-20 11:35:38 +0000 UTC" firstStartedPulling="2026-01-20 11:35:42.415395996 +0000 UTC m=+1870.623717959" lastFinishedPulling="2026-01-20 11:35:51.299331406 +0000 UTC m=+1879.507653369" observedRunningTime="2026-01-20 11:35:52.203549835 +0000 UTC m=+1880.411871808" watchObservedRunningTime="2026-01-20 11:35:52.21767643 +0000 UTC m=+1880.425998403"
Jan 20 11:35:52 crc kubenswrapper[4725]: I0120 11:35:52.232372 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" podStartSLOduration=4.62839938 podStartE2EDuration="32.232346613s" podCreationTimestamp="2026-01-20 11:35:20 +0000 UTC" firstStartedPulling="2026-01-20 11:35:23.601990602 +0000 UTC m=+1851.810312575" lastFinishedPulling="2026-01-20 11:35:51.205937835 +0000 UTC m=+1879.414259808" observedRunningTime="2026-01-20 11:35:52.23067554 +0000 UTC m=+1880.438997513" watchObservedRunningTime="2026-01-20 11:35:52.232346613 +0000 UTC m=+1880.440668586"
Jan 20 11:35:52 crc kubenswrapper[4725]: I0120 11:35:52.263510 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" podStartSLOduration=12.731688865 podStartE2EDuration="23.263480224s" podCreationTimestamp="2026-01-20 11:35:29 +0000 UTC" firstStartedPulling="2026-01-20 11:35:40.704234583 +0000 UTC m=+1868.912556556" lastFinishedPulling="2026-01-20 11:35:51.236025942 +0000 UTC m=+1879.444347915" observedRunningTime="2026-01-20 11:35:52.263255617 +0000 UTC m=+1880.471577590" watchObservedRunningTime="2026-01-20 11:35:52.263480224 +0000 UTC m=+1880.471802197"
Jan 20 11:35:52 crc kubenswrapper[4725]: I0120 11:35:52.305763 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" podStartSLOduration=4.297600789 podStartE2EDuration="35.305735315s" podCreationTimestamp="2026-01-20 11:35:17 +0000 UTC" firstStartedPulling="2026-01-20 11:35:20.292215643 +0000 UTC m=+1848.500537616" lastFinishedPulling="2026-01-20 11:35:51.300350169 +0000 UTC m=+1879.508672142" observedRunningTime="2026-01-20 11:35:52.299758057 +0000 UTC m=+1880.508080050" watchObservedRunningTime="2026-01-20 11:35:52.305735315 +0000 UTC m=+1880.514057288"
Jan 20 11:35:52 crc kubenswrapper[4725]: I0120 11:35:52.337350 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q" podStartSLOduration=2.915387005 podStartE2EDuration="11.337324401s" podCreationTimestamp="2026-01-20 11:35:41 +0000 UTC" firstStartedPulling="2026-01-20 11:35:42.764196565 +0000 UTC m=+1870.972518528" lastFinishedPulling="2026-01-20 11:35:51.186133961 +0000 UTC m=+1879.394455924" observedRunningTime="2026-01-20 11:35:52.334804781 +0000 UTC m=+1880.543126754" watchObservedRunningTime="2026-01-20 11:35:52.337324401 +0000 UTC m=+1880.545646374"
Jan 20 11:35:52 crc kubenswrapper[4725]: I0120 11:35:52.937702 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f"
Jan 20 11:35:52 crc kubenswrapper[4725]: E0120 11:35:52.937963 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 11:35:56 crc kubenswrapper[4725]: I0120 11:35:56.599946 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"f490a619-9c48-49a0-857b-904084871923","Type":"ContainerStarted","Data":"e7a78ef86418e44d2998c4a3a047af9b9fad08bcc5d9709e4ac74eb55dfde9e1"}
Jan 20 11:35:56 crc kubenswrapper[4725]: I0120 11:35:56.642242 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=30.458544344 podStartE2EDuration="1m5.642214047s" podCreationTimestamp="2026-01-20 11:34:51 +0000 UTC" firstStartedPulling="2026-01-20 11:35:20.412972377 +0000 UTC m=+1848.621294350" lastFinishedPulling="2026-01-20 11:35:55.59664208 +0000 UTC m=+1883.804964053" observedRunningTime="2026-01-20 11:35:56.633840462 +0000 UTC m=+1884.842162455" watchObservedRunningTime="2026-01-20 11:35:56.642214047 +0000 UTC m=+1884.850536020"
Jan 20 11:35:59 crc kubenswrapper[4725]: I0120 11:35:59.983732 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-pg5vh"]
Jan 20 11:35:59 crc kubenswrapper[4725]: I0120 11:35:59.984501 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" podUID="a7ed1b92-041f-4075-bbc5-89e61158d803" containerName="default-interconnect" containerID="cri-o://c14bd4fa6ae8b5c49bbef4c942582542d141988e84936a51137bc3a20377b033" gracePeriod=30
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.180358 4725 generic.go:334] "Generic (PLEG): container finished" podID="a7ed1b92-041f-4075-bbc5-89e61158d803" containerID="c14bd4fa6ae8b5c49bbef4c942582542d141988e84936a51137bc3a20377b033" exitCode=0
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.180440 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" event={"ID":"a7ed1b92-041f-4075-bbc5-89e61158d803","Type":"ContainerDied","Data":"c14bd4fa6ae8b5c49bbef4c942582542d141988e84936a51137bc3a20377b033"}
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.186002 4725 generic.go:334] "Generic (PLEG): container finished" podID="10b6bc99-b2ce-4952-a481-bbabe3a3fc16" containerID="213111dfdf0a0ff0f0b45369175eace3f63e8724046dbbebe7bbe9f19ce599f5" exitCode=0
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.186058 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" event={"ID":"10b6bc99-b2ce-4952-a481-bbabe3a3fc16","Type":"ContainerDied","Data":"213111dfdf0a0ff0f0b45369175eace3f63e8724046dbbebe7bbe9f19ce599f5"}
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.187041 4725 scope.go:117] "RemoveContainer" containerID="213111dfdf0a0ff0f0b45369175eace3f63e8724046dbbebe7bbe9f19ce599f5"
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.486041 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-pg5vh"
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.566689 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-openstack-ca\") pod \"a7ed1b92-041f-4075-bbc5-89e61158d803\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") "
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.566761 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndc66\" (UniqueName: \"kubernetes.io/projected/a7ed1b92-041f-4075-bbc5-89e61158d803-kube-api-access-ndc66\") pod \"a7ed1b92-041f-4075-bbc5-89e61158d803\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") "
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.566796 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-inter-router-ca\") pod \"a7ed1b92-041f-4075-bbc5-89e61158d803\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") "
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.566889 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-openstack-credentials\") pod \"a7ed1b92-041f-4075-bbc5-89e61158d803\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") "
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.566992 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-sasl-users\") pod \"a7ed1b92-041f-4075-bbc5-89e61158d803\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") "
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.567063 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a7ed1b92-041f-4075-bbc5-89e61158d803-sasl-config\") pod \"a7ed1b92-041f-4075-bbc5-89e61158d803\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") "
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.568205 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-inter-router-credentials\") pod \"a7ed1b92-041f-4075-bbc5-89e61158d803\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") "
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.568415 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7ed1b92-041f-4075-bbc5-89e61158d803-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "a7ed1b92-041f-4075-bbc5-89e61158d803" (UID: "a7ed1b92-041f-4075-bbc5-89e61158d803"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.568821 4725 reconciler_common.go:293] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a7ed1b92-041f-4075-bbc5-89e61158d803-sasl-config\") on node \"crc\" DevicePath \"\""
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.575525 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "a7ed1b92-041f-4075-bbc5-89e61158d803" (UID: "a7ed1b92-041f-4075-bbc5-89e61158d803"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.575562 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "a7ed1b92-041f-4075-bbc5-89e61158d803" (UID: "a7ed1b92-041f-4075-bbc5-89e61158d803"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.579920 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7ed1b92-041f-4075-bbc5-89e61158d803-kube-api-access-ndc66" (OuterVolumeSpecName: "kube-api-access-ndc66") pod "a7ed1b92-041f-4075-bbc5-89e61158d803" (UID: "a7ed1b92-041f-4075-bbc5-89e61158d803"). InnerVolumeSpecName "kube-api-access-ndc66". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.587247 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "a7ed1b92-041f-4075-bbc5-89e61158d803" (UID: "a7ed1b92-041f-4075-bbc5-89e61158d803"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.598774 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "a7ed1b92-041f-4075-bbc5-89e61158d803" (UID: "a7ed1b92-041f-4075-bbc5-89e61158d803"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.602938 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "a7ed1b92-041f-4075-bbc5-89e61158d803" (UID: "a7ed1b92-041f-4075-bbc5-89e61158d803"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.670395 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndc66\" (UniqueName: \"kubernetes.io/projected/a7ed1b92-041f-4075-bbc5-89e61158d803-kube-api-access-ndc66\") on node \"crc\" DevicePath \"\""
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.670448 4725 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\""
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.670468 4725 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\""
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.670487 4725 reconciler_common.go:293] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-sasl-users\") on node \"crc\" DevicePath \"\""
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.670509 4725 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\""
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.670524 4725 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\""
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.153848 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-mqfr7"]
Jan 20 11:36:01 crc kubenswrapper[4725]: E0120 11:36:01.154329 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ed1b92-041f-4075-bbc5-89e61158d803" containerName="default-interconnect"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.154388 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7ed1b92-041f-4075-bbc5-89e61158d803" containerName="default-interconnect"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.154581 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7ed1b92-041f-4075-bbc5-89e61158d803" containerName="default-interconnect"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.155352 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-mqfr7"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.175697 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-mqfr7"]
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.179443 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntl9k\" (UniqueName: \"kubernetes.io/projected/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-kube-api-access-ntl9k\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.179482 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.179553 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-sasl-config\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.179590 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.179618 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-sasl-users\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.179649 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.179679 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.200692 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" event={"ID":"a7ed1b92-041f-4075-bbc5-89e61158d803","Type":"ContainerDied","Data":"fc8242d5514e690ee80b2bdcc2ff5977848ca545548efc96d47954b1674d6f08"}
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.200786 4725 scope.go:117] "RemoveContainer" containerID="c14bd4fa6ae8b5c49bbef4c942582542d141988e84936a51137bc3a20377b033"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.201346 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.205617 4725 generic.go:334] "Generic (PLEG): container finished" podID="14922311-0e93-4bf9-8980-72baefd93497" containerID="0a94f1e7f7b86ee205276d80219c86dcc78f7edb37dbdbc85923fe9ded4fda65" exitCode=0 Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.205768 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" event={"ID":"14922311-0e93-4bf9-8980-72baefd93497","Type":"ContainerDied","Data":"0a94f1e7f7b86ee205276d80219c86dcc78f7edb37dbdbc85923fe9ded4fda65"} Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.207131 4725 scope.go:117] "RemoveContainer" containerID="0a94f1e7f7b86ee205276d80219c86dcc78f7edb37dbdbc85923fe9ded4fda65" Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.236575 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" event={"ID":"10b6bc99-b2ce-4952-a481-bbabe3a3fc16","Type":"ContainerStarted","Data":"f808b38c632a029f482d7ac80acbc52d0b763366efa16b63d7f6d3e8d5a6ff24"} Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.240312 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-pg5vh"] Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.241883 4725 generic.go:334] "Generic (PLEG): container finished" podID="f84a2726-80cb-4393-84ca-d901b4ee446c" containerID="bb62f6c31d67b824a936c21441a5c5ca97da949df73f348a3cfc81003a086165" exitCode=0 Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.242038 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q" 
event={"ID":"f84a2726-80cb-4393-84ca-d901b4ee446c","Type":"ContainerDied","Data":"bb62f6c31d67b824a936c21441a5c5ca97da949df73f348a3cfc81003a086165"} Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.251063 4725 generic.go:334] "Generic (PLEG): container finished" podID="739b7c2c-b11b-4260-a184-7dd184677dad" containerID="a814cc228323d0e0e96ee552884a26d9b2bed4b56fadbfdeef8c6d236729c1b3" exitCode=0 Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.251199 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm" event={"ID":"739b7c2c-b11b-4260-a184-7dd184677dad","Type":"ContainerDied","Data":"a814cc228323d0e0e96ee552884a26d9b2bed4b56fadbfdeef8c6d236729c1b3"} Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.251579 4725 scope.go:117] "RemoveContainer" containerID="bb62f6c31d67b824a936c21441a5c5ca97da949df73f348a3cfc81003a086165" Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.252137 4725 scope.go:117] "RemoveContainer" containerID="a814cc228323d0e0e96ee552884a26d9b2bed4b56fadbfdeef8c6d236729c1b3" Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.254321 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-pg5vh"] Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.275875 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" event={"ID":"6b74ea17-71c5-47e0-a15e-e963223f11f0","Type":"ContainerDied","Data":"85b9d5f0aedf2fae91704fa29f2787e20e867b09945f48a97ca1a50eab2049b5"} Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.276930 4725 scope.go:117] "RemoveContainer" containerID="85b9d5f0aedf2fae91704fa29f2787e20e867b09945f48a97ca1a50eab2049b5" Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.277851 4725 generic.go:334] "Generic (PLEG): container finished" podID="6b74ea17-71c5-47e0-a15e-e963223f11f0" 
containerID="85b9d5f0aedf2fae91704fa29f2787e20e867b09945f48a97ca1a50eab2049b5" exitCode=0 Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.309181 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7" Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.310750 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7" Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.310944 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntl9k\" (UniqueName: \"kubernetes.io/projected/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-kube-api-access-ntl9k\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7" Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.310982 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7" Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.318502 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-sasl-config\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7" Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.319356 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7" Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.319424 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-sasl-users\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7" Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.319731 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-sasl-config\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7" Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.337949 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " 
pod="service-telemetry/default-interconnect-68864d46cb-mqfr7" Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.338184 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-sasl-users\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7" Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.342440 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7" Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.343035 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7" Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.345302 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntl9k\" (UniqueName: \"kubernetes.io/projected/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-kube-api-access-ntl9k\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7" Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.352211 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: 
\"kubernetes.io/secret/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7" Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.474839 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-mqfr7" Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.918388 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-mqfr7"] Jan 20 11:36:01 crc kubenswrapper[4725]: W0120 11:36:01.941027 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b2eb85b_dd29_4dc6_9d02_1087e7119ae7.slice/crio-27a5f7c7c99d496bcece77ecc64842963e8c162c5ad5a16b80041c4dc8cf2337 WatchSource:0}: Error finding container 27a5f7c7c99d496bcece77ecc64842963e8c162c5ad5a16b80041c4dc8cf2337: Status 404 returned error can't find the container with id 27a5f7c7c99d496bcece77ecc64842963e8c162c5ad5a16b80041c4dc8cf2337 Jan 20 11:36:02 crc kubenswrapper[4725]: I0120 11:36:02.291782 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm" event={"ID":"739b7c2c-b11b-4260-a184-7dd184677dad","Type":"ContainerStarted","Data":"a88b9434de7bd9bfbc95d3f2b49a82c5e17930f19185b443e13efe132a738512"} Jan 20 11:36:02 crc kubenswrapper[4725]: I0120 11:36:02.298526 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" event={"ID":"6b74ea17-71c5-47e0-a15e-e963223f11f0","Type":"ContainerStarted","Data":"933a5a21d1158e8f5c0e37d8c018a75abf7f52832dd8eee14047121d6d55c0cd"} Jan 20 11:36:02 crc kubenswrapper[4725]: I0120 11:36:02.305139 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" event={"ID":"14922311-0e93-4bf9-8980-72baefd93497","Type":"ContainerStarted","Data":"a468fde69b7db36725df7d207536a0b73e2e6b8fa64a9e1e72baf69ef7b64605"} Jan 20 11:36:02 crc kubenswrapper[4725]: I0120 11:36:02.314411 4725 generic.go:334] "Generic (PLEG): container finished" podID="10b6bc99-b2ce-4952-a481-bbabe3a3fc16" containerID="f808b38c632a029f482d7ac80acbc52d0b763366efa16b63d7f6d3e8d5a6ff24" exitCode=0 Jan 20 11:36:02 crc kubenswrapper[4725]: I0120 11:36:02.314499 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" event={"ID":"10b6bc99-b2ce-4952-a481-bbabe3a3fc16","Type":"ContainerDied","Data":"f808b38c632a029f482d7ac80acbc52d0b763366efa16b63d7f6d3e8d5a6ff24"} Jan 20 11:36:02 crc kubenswrapper[4725]: I0120 11:36:02.314811 4725 scope.go:117] "RemoveContainer" containerID="213111dfdf0a0ff0f0b45369175eace3f63e8724046dbbebe7bbe9f19ce599f5" Jan 20 11:36:02 crc kubenswrapper[4725]: I0120 11:36:02.315782 4725 scope.go:117] "RemoveContainer" containerID="f808b38c632a029f482d7ac80acbc52d0b763366efa16b63d7f6d3e8d5a6ff24" Jan 20 11:36:02 crc kubenswrapper[4725]: E0120 11:36:02.316579 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g_service-telemetry(10b6bc99-b2ce-4952-a481-bbabe3a3fc16)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" podUID="10b6bc99-b2ce-4952-a481-bbabe3a3fc16" Jan 20 11:36:02 crc kubenswrapper[4725]: I0120 11:36:02.319063 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-mqfr7" 
event={"ID":"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7","Type":"ContainerStarted","Data":"ef50400501ae4fe7c570eb4d055f1e801792ee905dca11d7fff720f1b1cc625a"} Jan 20 11:36:02 crc kubenswrapper[4725]: I0120 11:36:02.319161 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-mqfr7" event={"ID":"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7","Type":"ContainerStarted","Data":"27a5f7c7c99d496bcece77ecc64842963e8c162c5ad5a16b80041c4dc8cf2337"} Jan 20 11:36:02 crc kubenswrapper[4725]: I0120 11:36:02.331045 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q" event={"ID":"f84a2726-80cb-4393-84ca-d901b4ee446c","Type":"ContainerStarted","Data":"868a1959d9b970de06b85cc00450471de5aa0fb783297a7e675acc22855a85f7"} Jan 20 11:36:02 crc kubenswrapper[4725]: I0120 11:36:02.519749 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-mqfr7" podStartSLOduration=3.519715451 podStartE2EDuration="3.519715451s" podCreationTimestamp="2026-01-20 11:35:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:36:02.486153608 +0000 UTC m=+1890.694475601" watchObservedRunningTime="2026-01-20 11:36:02.519715451 +0000 UTC m=+1890.728037434" Jan 20 11:36:02 crc kubenswrapper[4725]: I0120 11:36:02.999898 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7ed1b92-041f-4075-bbc5-89e61158d803" path="/var/lib/kubelet/pods/a7ed1b92-041f-4075-bbc5-89e61158d803/volumes" Jan 20 11:36:03 crc kubenswrapper[4725]: E0120 11:36:03.276405 4725 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod739b7c2c_b11b_4260_a184_7dd184677dad.slice/crio-conmon-a88b9434de7bd9bfbc95d3f2b49a82c5e17930f19185b443e13efe132a738512.scope\": RecentStats: unable to find data in memory cache]" Jan 20 11:36:03 crc kubenswrapper[4725]: I0120 11:36:03.346084 4725 generic.go:334] "Generic (PLEG): container finished" podID="6b74ea17-71c5-47e0-a15e-e963223f11f0" containerID="933a5a21d1158e8f5c0e37d8c018a75abf7f52832dd8eee14047121d6d55c0cd" exitCode=0 Jan 20 11:36:03 crc kubenswrapper[4725]: I0120 11:36:03.346653 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" event={"ID":"6b74ea17-71c5-47e0-a15e-e963223f11f0","Type":"ContainerDied","Data":"933a5a21d1158e8f5c0e37d8c018a75abf7f52832dd8eee14047121d6d55c0cd"} Jan 20 11:36:03 crc kubenswrapper[4725]: I0120 11:36:03.346704 4725 scope.go:117] "RemoveContainer" containerID="85b9d5f0aedf2fae91704fa29f2787e20e867b09945f48a97ca1a50eab2049b5" Jan 20 11:36:03 crc kubenswrapper[4725]: I0120 11:36:03.347632 4725 scope.go:117] "RemoveContainer" containerID="933a5a21d1158e8f5c0e37d8c018a75abf7f52832dd8eee14047121d6d55c0cd" Jan 20 11:36:03 crc kubenswrapper[4725]: E0120 11:36:03.347971 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p_service-telemetry(6b74ea17-71c5-47e0-a15e-e963223f11f0)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" podUID="6b74ea17-71c5-47e0-a15e-e963223f11f0" Jan 20 11:36:03 crc kubenswrapper[4725]: I0120 11:36:03.360200 4725 generic.go:334] "Generic (PLEG): container finished" podID="14922311-0e93-4bf9-8980-72baefd93497" containerID="a468fde69b7db36725df7d207536a0b73e2e6b8fa64a9e1e72baf69ef7b64605" exitCode=0 Jan 20 11:36:03 crc kubenswrapper[4725]: I0120 
11:36:03.360815 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" event={"ID":"14922311-0e93-4bf9-8980-72baefd93497","Type":"ContainerDied","Data":"a468fde69b7db36725df7d207536a0b73e2e6b8fa64a9e1e72baf69ef7b64605"} Jan 20 11:36:03 crc kubenswrapper[4725]: I0120 11:36:03.362101 4725 scope.go:117] "RemoveContainer" containerID="a468fde69b7db36725df7d207536a0b73e2e6b8fa64a9e1e72baf69ef7b64605" Jan 20 11:36:03 crc kubenswrapper[4725]: E0120 11:36:03.362499 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7_service-telemetry(14922311-0e93-4bf9-8980-72baefd93497)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" podUID="14922311-0e93-4bf9-8980-72baefd93497" Jan 20 11:36:03 crc kubenswrapper[4725]: I0120 11:36:03.377239 4725 generic.go:334] "Generic (PLEG): container finished" podID="f84a2726-80cb-4393-84ca-d901b4ee446c" containerID="868a1959d9b970de06b85cc00450471de5aa0fb783297a7e675acc22855a85f7" exitCode=0 Jan 20 11:36:03 crc kubenswrapper[4725]: I0120 11:36:03.377310 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q" event={"ID":"f84a2726-80cb-4393-84ca-d901b4ee446c","Type":"ContainerDied","Data":"868a1959d9b970de06b85cc00450471de5aa0fb783297a7e675acc22855a85f7"} Jan 20 11:36:03 crc kubenswrapper[4725]: I0120 11:36:03.378356 4725 scope.go:117] "RemoveContainer" containerID="868a1959d9b970de06b85cc00450471de5aa0fb783297a7e675acc22855a85f7" Jan 20 11:36:03 crc kubenswrapper[4725]: E0120 11:36:03.398697 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge 
pod=default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q_service-telemetry(f84a2726-80cb-4393-84ca-d901b4ee446c)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q" podUID="f84a2726-80cb-4393-84ca-d901b4ee446c" Jan 20 11:36:03 crc kubenswrapper[4725]: I0120 11:36:03.399003 4725 generic.go:334] "Generic (PLEG): container finished" podID="739b7c2c-b11b-4260-a184-7dd184677dad" containerID="a88b9434de7bd9bfbc95d3f2b49a82c5e17930f19185b443e13efe132a738512" exitCode=0 Jan 20 11:36:03 crc kubenswrapper[4725]: I0120 11:36:03.399172 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm" event={"ID":"739b7c2c-b11b-4260-a184-7dd184677dad","Type":"ContainerDied","Data":"a88b9434de7bd9bfbc95d3f2b49a82c5e17930f19185b443e13efe132a738512"} Jan 20 11:36:03 crc kubenswrapper[4725]: I0120 11:36:03.399968 4725 scope.go:117] "RemoveContainer" containerID="a88b9434de7bd9bfbc95d3f2b49a82c5e17930f19185b443e13efe132a738512" Jan 20 11:36:03 crc kubenswrapper[4725]: E0120 11:36:03.400370 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-ff457bf89-458zm_service-telemetry(739b7c2c-b11b-4260-a184-7dd184677dad)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm" podUID="739b7c2c-b11b-4260-a184-7dd184677dad" Jan 20 11:36:03 crc kubenswrapper[4725]: I0120 11:36:03.430370 4725 scope.go:117] "RemoveContainer" containerID="0a94f1e7f7b86ee205276d80219c86dcc78f7edb37dbdbc85923fe9ded4fda65" Jan 20 11:36:03 crc kubenswrapper[4725]: I0120 11:36:03.501740 4725 scope.go:117] "RemoveContainer" containerID="bb62f6c31d67b824a936c21441a5c5ca97da949df73f348a3cfc81003a086165" Jan 20 11:36:03 crc kubenswrapper[4725]: I0120 11:36:03.551871 4725 scope.go:117] "RemoveContainer" 
containerID="a814cc228323d0e0e96ee552884a26d9b2bed4b56fadbfdeef8c6d236729c1b3" Jan 20 11:36:04 crc kubenswrapper[4725]: I0120 11:36:04.932347 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:36:04 crc kubenswrapper[4725]: E0120 11:36:04.932592 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:36:06 crc kubenswrapper[4725]: I0120 11:36:06.891651 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Jan 20 11:36:06 crc kubenswrapper[4725]: I0120 11:36:06.893325 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/qdr-test" Jan 20 11:36:06 crc kubenswrapper[4725]: I0120 11:36:06.895877 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"qdr-test-config" Jan 20 11:36:06 crc kubenswrapper[4725]: I0120 11:36:06.896214 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-selfsigned" Jan 20 11:36:06 crc kubenswrapper[4725]: I0120 11:36:06.920453 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Jan 20 11:36:07 crc kubenswrapper[4725]: I0120 11:36:07.066225 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/879163eb-1e0f-4030-aec9-69331c2e5ecd-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"879163eb-1e0f-4030-aec9-69331c2e5ecd\") " pod="service-telemetry/qdr-test" Jan 20 11:36:07 crc kubenswrapper[4725]: I0120 11:36:07.066307 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/879163eb-1e0f-4030-aec9-69331c2e5ecd-qdr-test-config\") pod \"qdr-test\" (UID: \"879163eb-1e0f-4030-aec9-69331c2e5ecd\") " pod="service-telemetry/qdr-test" Jan 20 11:36:07 crc kubenswrapper[4725]: I0120 11:36:07.066680 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q8b2\" (UniqueName: \"kubernetes.io/projected/879163eb-1e0f-4030-aec9-69331c2e5ecd-kube-api-access-5q8b2\") pod \"qdr-test\" (UID: \"879163eb-1e0f-4030-aec9-69331c2e5ecd\") " pod="service-telemetry/qdr-test" Jan 20 11:36:07 crc kubenswrapper[4725]: I0120 11:36:07.168490 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: 
\"kubernetes.io/secret/879163eb-1e0f-4030-aec9-69331c2e5ecd-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"879163eb-1e0f-4030-aec9-69331c2e5ecd\") " pod="service-telemetry/qdr-test" Jan 20 11:36:07 crc kubenswrapper[4725]: I0120 11:36:07.168575 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/879163eb-1e0f-4030-aec9-69331c2e5ecd-qdr-test-config\") pod \"qdr-test\" (UID: \"879163eb-1e0f-4030-aec9-69331c2e5ecd\") " pod="service-telemetry/qdr-test" Jan 20 11:36:07 crc kubenswrapper[4725]: I0120 11:36:07.168655 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q8b2\" (UniqueName: \"kubernetes.io/projected/879163eb-1e0f-4030-aec9-69331c2e5ecd-kube-api-access-5q8b2\") pod \"qdr-test\" (UID: \"879163eb-1e0f-4030-aec9-69331c2e5ecd\") " pod="service-telemetry/qdr-test" Jan 20 11:36:07 crc kubenswrapper[4725]: I0120 11:36:07.170633 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/879163eb-1e0f-4030-aec9-69331c2e5ecd-qdr-test-config\") pod \"qdr-test\" (UID: \"879163eb-1e0f-4030-aec9-69331c2e5ecd\") " pod="service-telemetry/qdr-test" Jan 20 11:36:07 crc kubenswrapper[4725]: I0120 11:36:07.183829 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/879163eb-1e0f-4030-aec9-69331c2e5ecd-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"879163eb-1e0f-4030-aec9-69331c2e5ecd\") " pod="service-telemetry/qdr-test" Jan 20 11:36:07 crc kubenswrapper[4725]: I0120 11:36:07.198916 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q8b2\" (UniqueName: \"kubernetes.io/projected/879163eb-1e0f-4030-aec9-69331c2e5ecd-kube-api-access-5q8b2\") pod \"qdr-test\" (UID: \"879163eb-1e0f-4030-aec9-69331c2e5ecd\") 
" pod="service-telemetry/qdr-test" Jan 20 11:36:07 crc kubenswrapper[4725]: I0120 11:36:07.245533 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Jan 20 11:36:07 crc kubenswrapper[4725]: I0120 11:36:07.535706 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Jan 20 11:36:08 crc kubenswrapper[4725]: I0120 11:36:08.465961 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"879163eb-1e0f-4030-aec9-69331c2e5ecd","Type":"ContainerStarted","Data":"da993ebcd8b9228232ab084c858484929b255999d4f0715660ec5ee17652eb67"} Jan 20 11:36:12 crc kubenswrapper[4725]: I0120 11:36:12.996294 4725 scope.go:117] "RemoveContainer" containerID="f808b38c632a029f482d7ac80acbc52d0b763366efa16b63d7f6d3e8d5a6ff24" Jan 20 11:36:14 crc kubenswrapper[4725]: I0120 11:36:14.932102 4725 scope.go:117] "RemoveContainer" containerID="868a1959d9b970de06b85cc00450471de5aa0fb783297a7e675acc22855a85f7" Jan 20 11:36:14 crc kubenswrapper[4725]: I0120 11:36:14.932736 4725 scope.go:117] "RemoveContainer" containerID="a88b9434de7bd9bfbc95d3f2b49a82c5e17930f19185b443e13efe132a738512" Jan 20 11:36:16 crc kubenswrapper[4725]: I0120 11:36:16.933122 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:36:16 crc kubenswrapper[4725]: I0120 11:36:16.933220 4725 scope.go:117] "RemoveContainer" containerID="a468fde69b7db36725df7d207536a0b73e2e6b8fa64a9e1e72baf69ef7b64605" Jan 20 11:36:16 crc kubenswrapper[4725]: E0120 11:36:16.933529 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:36:17 crc kubenswrapper[4725]: I0120 11:36:17.932508 4725 scope.go:117] "RemoveContainer" containerID="933a5a21d1158e8f5c0e37d8c018a75abf7f52832dd8eee14047121d6d55c0cd" Jan 20 11:36:20 crc kubenswrapper[4725]: I0120 11:36:20.547557 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" event={"ID":"6b74ea17-71c5-47e0-a15e-e963223f11f0","Type":"ContainerStarted","Data":"4c228724f4e58a02ba325639e1f96f37ea92c44426e3e68588ae4f2d2f4ac377"} Jan 20 11:36:20 crc kubenswrapper[4725]: I0120 11:36:20.551584 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" event={"ID":"14922311-0e93-4bf9-8980-72baefd93497","Type":"ContainerStarted","Data":"7ec44d266962c93700d463ae8888809f4e095d672e0f139d7b68c8bf45fb1aa5"} Jan 20 11:36:20 crc kubenswrapper[4725]: I0120 11:36:20.597315 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" event={"ID":"10b6bc99-b2ce-4952-a481-bbabe3a3fc16","Type":"ContainerStarted","Data":"401c2e3ccec1b76d610118cfaaaf7b350b782353b93f785559a1c4b50a8c6ae6"} Jan 20 11:36:20 crc kubenswrapper[4725]: I0120 11:36:20.605813 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q" event={"ID":"f84a2726-80cb-4393-84ca-d901b4ee446c","Type":"ContainerStarted","Data":"0f71d171fc42eedd96b197809e210772b9866ec69c7bf42b7ddc238b4cc06796"} Jan 20 11:36:20 crc kubenswrapper[4725]: I0120 11:36:20.622756 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"879163eb-1e0f-4030-aec9-69331c2e5ecd","Type":"ContainerStarted","Data":"549e75fbed0185c0c494da95bccd3cd34e90cffcf46c0dc0491ab85ac7ed11cc"} Jan 
20 11:36:20 crc kubenswrapper[4725]: I0120 11:36:20.640766 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm" event={"ID":"739b7c2c-b11b-4260-a184-7dd184677dad","Type":"ContainerStarted","Data":"9118a6dea9c5c8d5d0335ad2409ad563a8078a6b0c6a8d4a32446b247a75d423"} Jan 20 11:36:20 crc kubenswrapper[4725]: I0120 11:36:20.744394 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=2.290329099 podStartE2EDuration="14.74435841s" podCreationTimestamp="2026-01-20 11:36:06 +0000 UTC" firstStartedPulling="2026-01-20 11:36:07.545727692 +0000 UTC m=+1895.754049665" lastFinishedPulling="2026-01-20 11:36:19.999757003 +0000 UTC m=+1908.208078976" observedRunningTime="2026-01-20 11:36:20.741652414 +0000 UTC m=+1908.949974407" watchObservedRunningTime="2026-01-20 11:36:20.74435841 +0000 UTC m=+1908.952680383" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.092854 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-phjxw"] Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.094557 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.097917 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.097998 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.098901 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.098910 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.099249 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.099503 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.173291 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-phjxw"] Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.208173 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.208253 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"collectd-config\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-collectd-config\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.208285 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-ceilometer-publisher\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.208322 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-healthcheck-log\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.208344 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-sensubility-config\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.208376 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 
11:36:21.208408 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btsq6\" (UniqueName: \"kubernetes.io/projected/3e274138-1522-41f2-8021-9f425af23d2e-kube-api-access-btsq6\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.310941 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.311036 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btsq6\" (UniqueName: \"kubernetes.io/projected/3e274138-1522-41f2-8021-9f425af23d2e-kube-api-access-btsq6\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.311187 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.311228 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-collectd-config\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " 
pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.311264 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-ceilometer-publisher\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.311314 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-healthcheck-log\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.311346 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-sensubility-config\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.312387 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-collectd-config\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.312486 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-sensubility-config\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " 
pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.312515 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.313166 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-healthcheck-log\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.313448 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-ceilometer-publisher\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.313867 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.340717 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btsq6\" (UniqueName: \"kubernetes.io/projected/3e274138-1522-41f2-8021-9f425af23d2e-kube-api-access-btsq6\") pod \"stf-smoketest-smoke1-phjxw\" (UID: 
\"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.417858 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.503158 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.508450 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.516111 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.617228 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt5nh\" (UniqueName: \"kubernetes.io/projected/650f5183-3a46-4da1-befe-a96b43c85a6e-kube-api-access-bt5nh\") pod \"curl\" (UID: \"650f5183-3a46-4da1-befe-a96b43c85a6e\") " pod="service-telemetry/curl" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.719426 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt5nh\" (UniqueName: \"kubernetes.io/projected/650f5183-3a46-4da1-befe-a96b43c85a6e-kube-api-access-bt5nh\") pod \"curl\" (UID: \"650f5183-3a46-4da1-befe-a96b43c85a6e\") " pod="service-telemetry/curl" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.751023 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt5nh\" (UniqueName: \"kubernetes.io/projected/650f5183-3a46-4da1-befe-a96b43c85a6e-kube-api-access-bt5nh\") pod \"curl\" (UID: \"650f5183-3a46-4da1-befe-a96b43c85a6e\") " pod="service-telemetry/curl" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.805245 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["service-telemetry/stf-smoketest-smoke1-phjxw"] Jan 20 11:36:21 crc kubenswrapper[4725]: W0120 11:36:21.817622 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e274138_1522_41f2_8021_9f425af23d2e.slice/crio-e1ce5366d197da61ce1a384858992c4ffc1d4a3f35444e39eaa00f356d1406be WatchSource:0}: Error finding container e1ce5366d197da61ce1a384858992c4ffc1d4a3f35444e39eaa00f356d1406be: Status 404 returned error can't find the container with id e1ce5366d197da61ce1a384858992c4ffc1d4a3f35444e39eaa00f356d1406be Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.855971 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Jan 20 11:36:22 crc kubenswrapper[4725]: I0120 11:36:22.368035 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Jan 20 11:36:22 crc kubenswrapper[4725]: I0120 11:36:22.675613 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-phjxw" event={"ID":"3e274138-1522-41f2-8021-9f425af23d2e","Type":"ContainerStarted","Data":"e1ce5366d197da61ce1a384858992c4ffc1d4a3f35444e39eaa00f356d1406be"} Jan 20 11:36:22 crc kubenswrapper[4725]: I0120 11:36:22.681149 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"650f5183-3a46-4da1-befe-a96b43c85a6e","Type":"ContainerStarted","Data":"a490f5b22bfba6cd0b89d78a402f51a5e98d798b3da34bb7f8ae944b4ab7f5f4"} Jan 20 11:36:31 crc kubenswrapper[4725]: I0120 11:36:27.933224 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:36:31 crc kubenswrapper[4725]: E0120 11:36:27.934422 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:36:34 crc kubenswrapper[4725]: I0120 11:36:34.955303 4725 generic.go:334] "Generic (PLEG): container finished" podID="650f5183-3a46-4da1-befe-a96b43c85a6e" containerID="86066600570f530a32f6940cdab38a7b29b48d19dbe081cc9e4d1ce34109f5bc" exitCode=0 Jan 20 11:36:34 crc kubenswrapper[4725]: I0120 11:36:34.956293 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"650f5183-3a46-4da1-befe-a96b43c85a6e","Type":"ContainerDied","Data":"86066600570f530a32f6940cdab38a7b29b48d19dbe081cc9e4d1ce34109f5bc"} Jan 20 11:36:41 crc kubenswrapper[4725]: I0120 11:36:41.938895 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:36:41 crc kubenswrapper[4725]: E0120 11:36:41.939971 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:36:43 crc kubenswrapper[4725]: I0120 11:36:43.689229 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Jan 20 11:36:43 crc kubenswrapper[4725]: I0120 11:36:43.777960 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt5nh\" (UniqueName: \"kubernetes.io/projected/650f5183-3a46-4da1-befe-a96b43c85a6e-kube-api-access-bt5nh\") pod \"650f5183-3a46-4da1-befe-a96b43c85a6e\" (UID: \"650f5183-3a46-4da1-befe-a96b43c85a6e\") " Jan 20 11:36:43 crc kubenswrapper[4725]: I0120 11:36:43.784289 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/650f5183-3a46-4da1-befe-a96b43c85a6e-kube-api-access-bt5nh" (OuterVolumeSpecName: "kube-api-access-bt5nh") pod "650f5183-3a46-4da1-befe-a96b43c85a6e" (UID: "650f5183-3a46-4da1-befe-a96b43c85a6e"). InnerVolumeSpecName "kube-api-access-bt5nh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:36:43 crc kubenswrapper[4725]: I0120 11:36:43.848111 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_650f5183-3a46-4da1-befe-a96b43c85a6e/curl/0.log" Jan 20 11:36:43 crc kubenswrapper[4725]: I0120 11:36:43.880804 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bt5nh\" (UniqueName: \"kubernetes.io/projected/650f5183-3a46-4da1-befe-a96b43c85a6e-kube-api-access-bt5nh\") on node \"crc\" DevicePath \"\"" Jan 20 11:36:44 crc kubenswrapper[4725]: I0120 11:36:44.055166 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-phjxw" event={"ID":"3e274138-1522-41f2-8021-9f425af23d2e","Type":"ContainerStarted","Data":"2bea22707d430289bbc9f0f0b5bc7ce6dc6208ffc712275eba298cf4827f844c"} Jan 20 11:36:44 crc kubenswrapper[4725]: I0120 11:36:44.057601 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"650f5183-3a46-4da1-befe-a96b43c85a6e","Type":"ContainerDied","Data":"a490f5b22bfba6cd0b89d78a402f51a5e98d798b3da34bb7f8ae944b4ab7f5f4"} Jan 20 11:36:44 
crc kubenswrapper[4725]: I0120 11:36:44.057642 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a490f5b22bfba6cd0b89d78a402f51a5e98d798b3da34bb7f8ae944b4ab7f5f4" Jan 20 11:36:44 crc kubenswrapper[4725]: I0120 11:36:44.057681 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Jan 20 11:36:44 crc kubenswrapper[4725]: I0120 11:36:44.091542 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-fxcvg_c22fff0f-fa8e-40e0-a8dc-a138398b06e7/prometheus-webhook-snmp/0.log" Jan 20 11:36:52 crc kubenswrapper[4725]: I0120 11:36:52.379699 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-phjxw" event={"ID":"3e274138-1522-41f2-8021-9f425af23d2e","Type":"ContainerStarted","Data":"b6e3347cd1127e0cb9014bb89ae882927f09f07ba800282ebb6c076670a28aa0"} Jan 20 11:36:52 crc kubenswrapper[4725]: I0120 11:36:52.405851 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-phjxw" podStartSLOduration=1.428668712 podStartE2EDuration="31.40583621s" podCreationTimestamp="2026-01-20 11:36:21 +0000 UTC" firstStartedPulling="2026-01-20 11:36:21.822524179 +0000 UTC m=+1910.030846162" lastFinishedPulling="2026-01-20 11:36:51.799691687 +0000 UTC m=+1940.008013660" observedRunningTime="2026-01-20 11:36:52.403132484 +0000 UTC m=+1940.611454457" watchObservedRunningTime="2026-01-20 11:36:52.40583621 +0000 UTC m=+1940.614158183" Jan 20 11:36:53 crc kubenswrapper[4725]: I0120 11:36:53.933128 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:36:53 crc kubenswrapper[4725]: E0120 11:36:53.933717 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:37:08 crc kubenswrapper[4725]: I0120 11:37:08.072006 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:37:08 crc kubenswrapper[4725]: E0120 11:37:08.075004 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:37:14 crc kubenswrapper[4725]: I0120 11:37:14.221933 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-fxcvg_c22fff0f-fa8e-40e0-a8dc-a138398b06e7/prometheus-webhook-snmp/0.log" Jan 20 11:37:16 crc kubenswrapper[4725]: I0120 11:37:16.632889 4725 scope.go:117] "RemoveContainer" containerID="cbb40b4a35af16ef739d7936989eb2a98cbe2e9f78178e91db6ddf8b1dfef24b" Jan 20 11:37:16 crc kubenswrapper[4725]: I0120 11:37:16.669199 4725 scope.go:117] "RemoveContainer" containerID="697a37843b8a0440d43c4e8976463aac27a527f1025878803dd957ce26ac737d" Jan 20 11:37:16 crc kubenswrapper[4725]: I0120 11:37:16.706853 4725 scope.go:117] "RemoveContainer" containerID="322aa27a42fe64732b61397f3af12e6913daf6723474abf3c9c0bde2daa65c96" Jan 20 11:37:16 crc kubenswrapper[4725]: I0120 11:37:16.738595 4725 scope.go:117] "RemoveContainer" containerID="78f02562103ddffde1093928ec6242b4c8b49a6f4ce128c626fad826fff2e675" Jan 20 11:37:18 crc kubenswrapper[4725]: I0120 11:37:18.599387 4725 generic.go:334] "Generic (PLEG): 
container finished" podID="3e274138-1522-41f2-8021-9f425af23d2e" containerID="2bea22707d430289bbc9f0f0b5bc7ce6dc6208ffc712275eba298cf4827f844c" exitCode=1 Jan 20 11:37:18 crc kubenswrapper[4725]: I0120 11:37:18.599462 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-phjxw" event={"ID":"3e274138-1522-41f2-8021-9f425af23d2e","Type":"ContainerDied","Data":"2bea22707d430289bbc9f0f0b5bc7ce6dc6208ffc712275eba298cf4827f844c"} Jan 20 11:37:18 crc kubenswrapper[4725]: I0120 11:37:18.600625 4725 scope.go:117] "RemoveContainer" containerID="2bea22707d430289bbc9f0f0b5bc7ce6dc6208ffc712275eba298cf4827f844c" Jan 20 11:37:20 crc kubenswrapper[4725]: I0120 11:37:20.932613 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:37:20 crc kubenswrapper[4725]: E0120 11:37:20.932931 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:37:24 crc kubenswrapper[4725]: I0120 11:37:24.653776 4725 generic.go:334] "Generic (PLEG): container finished" podID="3e274138-1522-41f2-8021-9f425af23d2e" containerID="b6e3347cd1127e0cb9014bb89ae882927f09f07ba800282ebb6c076670a28aa0" exitCode=1 Jan 20 11:37:24 crc kubenswrapper[4725]: I0120 11:37:24.653884 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-phjxw" event={"ID":"3e274138-1522-41f2-8021-9f425af23d2e","Type":"ContainerDied","Data":"b6e3347cd1127e0cb9014bb89ae882927f09f07ba800282ebb6c076670a28aa0"} Jan 20 11:37:25 crc kubenswrapper[4725]: I0120 11:37:25.922278 4725 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.101959 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-collectd-config\") pod \"3e274138-1522-41f2-8021-9f425af23d2e\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.102225 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-ceilometer-entrypoint-script\") pod \"3e274138-1522-41f2-8021-9f425af23d2e\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.102349 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-ceilometer-publisher\") pod \"3e274138-1522-41f2-8021-9f425af23d2e\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.102578 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-collectd-entrypoint-script\") pod \"3e274138-1522-41f2-8021-9f425af23d2e\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.102617 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btsq6\" (UniqueName: \"kubernetes.io/projected/3e274138-1522-41f2-8021-9f425af23d2e-kube-api-access-btsq6\") pod \"3e274138-1522-41f2-8021-9f425af23d2e\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 
11:37:26.102695 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-sensubility-config\") pod \"3e274138-1522-41f2-8021-9f425af23d2e\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.102789 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-healthcheck-log\") pod \"3e274138-1522-41f2-8021-9f425af23d2e\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.110350 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e274138-1522-41f2-8021-9f425af23d2e-kube-api-access-btsq6" (OuterVolumeSpecName: "kube-api-access-btsq6") pod "3e274138-1522-41f2-8021-9f425af23d2e" (UID: "3e274138-1522-41f2-8021-9f425af23d2e"). InnerVolumeSpecName "kube-api-access-btsq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.122944 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "3e274138-1522-41f2-8021-9f425af23d2e" (UID: "3e274138-1522-41f2-8021-9f425af23d2e"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.123007 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "3e274138-1522-41f2-8021-9f425af23d2e" (UID: "3e274138-1522-41f2-8021-9f425af23d2e"). 
InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.124878 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "3e274138-1522-41f2-8021-9f425af23d2e" (UID: "3e274138-1522-41f2-8021-9f425af23d2e"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.125755 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "3e274138-1522-41f2-8021-9f425af23d2e" (UID: "3e274138-1522-41f2-8021-9f425af23d2e"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.126799 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "3e274138-1522-41f2-8021-9f425af23d2e" (UID: "3e274138-1522-41f2-8021-9f425af23d2e"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.128376 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "3e274138-1522-41f2-8021-9f425af23d2e" (UID: "3e274138-1522-41f2-8021-9f425af23d2e"). InnerVolumeSpecName "ceilometer-publisher". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.205348 4725 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-healthcheck-log\") on node \"crc\" DevicePath \"\"" Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.205392 4725 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-collectd-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.205690 4725 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.205708 4725 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.205721 4725 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.205836 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btsq6\" (UniqueName: \"kubernetes.io/projected/3e274138-1522-41f2-8021-9f425af23d2e-kube-api-access-btsq6\") on node \"crc\" DevicePath \"\"" Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.205847 4725 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-sensubility-config\") on node 
\"crc\" DevicePath \"\"" Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.676415 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-phjxw" event={"ID":"3e274138-1522-41f2-8021-9f425af23d2e","Type":"ContainerDied","Data":"e1ce5366d197da61ce1a384858992c4ffc1d4a3f35444e39eaa00f356d1406be"} Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.676476 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1ce5366d197da61ce1a384858992c4ffc1d4a3f35444e39eaa00f356d1406be" Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.676993 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:37:31 crc kubenswrapper[4725]: I0120 11:37:31.933729 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:37:32 crc kubenswrapper[4725]: I0120 11:37:32.748552 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"292cc1dcad048c4d54590f6587ea4f65e53dd5f83f4235deae520cfb086277da"} Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.030259 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-n5jwb"] Jan 20 11:37:33 crc kubenswrapper[4725]: E0120 11:37:33.031095 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650f5183-3a46-4da1-befe-a96b43c85a6e" containerName="curl" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.031115 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="650f5183-3a46-4da1-befe-a96b43c85a6e" containerName="curl" Jan 20 11:37:33 crc kubenswrapper[4725]: E0120 11:37:33.031137 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e274138-1522-41f2-8021-9f425af23d2e" 
containerName="smoketest-collectd" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.031147 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e274138-1522-41f2-8021-9f425af23d2e" containerName="smoketest-collectd" Jan 20 11:37:33 crc kubenswrapper[4725]: E0120 11:37:33.031175 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e274138-1522-41f2-8021-9f425af23d2e" containerName="smoketest-ceilometer" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.031189 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e274138-1522-41f2-8021-9f425af23d2e" containerName="smoketest-ceilometer" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.031355 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="650f5183-3a46-4da1-befe-a96b43c85a6e" containerName="curl" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.031382 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e274138-1522-41f2-8021-9f425af23d2e" containerName="smoketest-collectd" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.031401 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e274138-1522-41f2-8021-9f425af23d2e" containerName="smoketest-ceilometer" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.032422 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.040504 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.041050 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.041610 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.041821 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.063857 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.064968 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.078810 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-n5jwb"] Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.132861 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.132949 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"collectd-config\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-collectd-config\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.132988 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.133062 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-healthcheck-log\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.133125 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-ceilometer-publisher\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.133267 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-sensubility-config\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 
11:37:33.133528 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz88w\" (UniqueName: \"kubernetes.io/projected/98772f19-fcd3-4ee3-91e7-aa87154c3c50-kube-api-access-mz88w\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.234955 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.235109 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-healthcheck-log\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.235158 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-ceilometer-publisher\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.235203 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-sensubility-config\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " 
pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.235237 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz88w\" (UniqueName: \"kubernetes.io/projected/98772f19-fcd3-4ee3-91e7-aa87154c3c50-kube-api-access-mz88w\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.235282 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.235325 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-collectd-config\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.236443 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-collectd-config\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.237198 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: 
\"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.237832 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-healthcheck-log\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.238654 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-ceilometer-publisher\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.239236 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-sensubility-config\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.239589 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.259482 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz88w\" (UniqueName: \"kubernetes.io/projected/98772f19-fcd3-4ee3-91e7-aa87154c3c50-kube-api-access-mz88w\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: 
\"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.364410 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.637401 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-n5jwb"] Jan 20 11:37:33 crc kubenswrapper[4725]: W0120 11:37:33.641002 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98772f19_fcd3_4ee3_91e7_aa87154c3c50.slice/crio-6a4ad64525a1e3d081a93d4a2fa5d9709ba3a5d05fcbdda1ae9ff9b3298dd5c4 WatchSource:0}: Error finding container 6a4ad64525a1e3d081a93d4a2fa5d9709ba3a5d05fcbdda1ae9ff9b3298dd5c4: Status 404 returned error can't find the container with id 6a4ad64525a1e3d081a93d4a2fa5d9709ba3a5d05fcbdda1ae9ff9b3298dd5c4 Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.758731 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-n5jwb" event={"ID":"98772f19-fcd3-4ee3-91e7-aa87154c3c50","Type":"ContainerStarted","Data":"6a4ad64525a1e3d081a93d4a2fa5d9709ba3a5d05fcbdda1ae9ff9b3298dd5c4"} Jan 20 11:37:34 crc kubenswrapper[4725]: I0120 11:37:34.771244 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-n5jwb" event={"ID":"98772f19-fcd3-4ee3-91e7-aa87154c3c50","Type":"ContainerStarted","Data":"4202eace9a0dac1a2bf22b64ca1974be4b477cc16863898bcfcdbe1b277657c2"} Jan 20 11:37:34 crc kubenswrapper[4725]: I0120 11:37:34.771727 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-n5jwb" event={"ID":"98772f19-fcd3-4ee3-91e7-aa87154c3c50","Type":"ContainerStarted","Data":"4cf47d02e83874585e6aa2dc72086299ea42bb5eaa1e6208d969a381f36e3229"} Jan 20 11:37:34 crc kubenswrapper[4725]: 
I0120 11:37:34.794041 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-n5jwb" podStartSLOduration=1.7939993570000001 podStartE2EDuration="1.793999357s" podCreationTimestamp="2026-01-20 11:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:37:34.79280609 +0000 UTC m=+1983.001128053" watchObservedRunningTime="2026-01-20 11:37:34.793999357 +0000 UTC m=+1983.002321330" Jan 20 11:38:07 crc kubenswrapper[4725]: I0120 11:38:07.068647 4725 generic.go:334] "Generic (PLEG): container finished" podID="98772f19-fcd3-4ee3-91e7-aa87154c3c50" containerID="4202eace9a0dac1a2bf22b64ca1974be4b477cc16863898bcfcdbe1b277657c2" exitCode=1 Jan 20 11:38:07 crc kubenswrapper[4725]: I0120 11:38:07.068746 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-n5jwb" event={"ID":"98772f19-fcd3-4ee3-91e7-aa87154c3c50","Type":"ContainerDied","Data":"4202eace9a0dac1a2bf22b64ca1974be4b477cc16863898bcfcdbe1b277657c2"} Jan 20 11:38:07 crc kubenswrapper[4725]: I0120 11:38:07.070621 4725 scope.go:117] "RemoveContainer" containerID="4202eace9a0dac1a2bf22b64ca1974be4b477cc16863898bcfcdbe1b277657c2" Jan 20 11:38:08 crc kubenswrapper[4725]: I0120 11:38:08.080587 4725 generic.go:334] "Generic (PLEG): container finished" podID="98772f19-fcd3-4ee3-91e7-aa87154c3c50" containerID="4cf47d02e83874585e6aa2dc72086299ea42bb5eaa1e6208d969a381f36e3229" exitCode=1 Jan 20 11:38:08 crc kubenswrapper[4725]: I0120 11:38:08.080675 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-n5jwb" event={"ID":"98772f19-fcd3-4ee3-91e7-aa87154c3c50","Type":"ContainerDied","Data":"4cf47d02e83874585e6aa2dc72086299ea42bb5eaa1e6208d969a381f36e3229"} Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.346804 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.429587 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz88w\" (UniqueName: \"kubernetes.io/projected/98772f19-fcd3-4ee3-91e7-aa87154c3c50-kube-api-access-mz88w\") pod \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.430196 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-collectd-entrypoint-script\") pod \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.430283 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-collectd-config\") pod \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.430351 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-sensubility-config\") pod \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.430429 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-healthcheck-log\") pod \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.430513 4725 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-ceilometer-entrypoint-script\") pod \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.430556 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-ceilometer-publisher\") pod \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.437826 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98772f19-fcd3-4ee3-91e7-aa87154c3c50-kube-api-access-mz88w" (OuterVolumeSpecName: "kube-api-access-mz88w") pod "98772f19-fcd3-4ee3-91e7-aa87154c3c50" (UID: "98772f19-fcd3-4ee3-91e7-aa87154c3c50"). InnerVolumeSpecName "kube-api-access-mz88w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.451906 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "98772f19-fcd3-4ee3-91e7-aa87154c3c50" (UID: "98772f19-fcd3-4ee3-91e7-aa87154c3c50"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.452035 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "98772f19-fcd3-4ee3-91e7-aa87154c3c50" (UID: "98772f19-fcd3-4ee3-91e7-aa87154c3c50"). InnerVolumeSpecName "collectd-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.451969 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "98772f19-fcd3-4ee3-91e7-aa87154c3c50" (UID: "98772f19-fcd3-4ee3-91e7-aa87154c3c50"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.452643 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "98772f19-fcd3-4ee3-91e7-aa87154c3c50" (UID: "98772f19-fcd3-4ee3-91e7-aa87154c3c50"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.453299 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "98772f19-fcd3-4ee3-91e7-aa87154c3c50" (UID: "98772f19-fcd3-4ee3-91e7-aa87154c3c50"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.455389 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "98772f19-fcd3-4ee3-91e7-aa87154c3c50" (UID: "98772f19-fcd3-4ee3-91e7-aa87154c3c50"). InnerVolumeSpecName "sensubility-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.532323 4725 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.532372 4725 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.532384 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz88w\" (UniqueName: \"kubernetes.io/projected/98772f19-fcd3-4ee3-91e7-aa87154c3c50-kube-api-access-mz88w\") on node \"crc\" DevicePath \"\"" Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.532393 4725 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.532404 4725 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-collectd-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.532414 4725 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-sensubility-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.532422 4725 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-healthcheck-log\") on node 
\"crc\" DevicePath \"\"" Jan 20 11:38:10 crc kubenswrapper[4725]: I0120 11:38:10.101608 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-n5jwb" event={"ID":"98772f19-fcd3-4ee3-91e7-aa87154c3c50","Type":"ContainerDied","Data":"6a4ad64525a1e3d081a93d4a2fa5d9709ba3a5d05fcbdda1ae9ff9b3298dd5c4"} Jan 20 11:38:10 crc kubenswrapper[4725]: I0120 11:38:10.101682 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:38:10 crc kubenswrapper[4725]: I0120 11:38:10.101688 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a4ad64525a1e3d081a93d4a2fa5d9709ba3a5d05fcbdda1ae9ff9b3298dd5c4" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.035024 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-z2qv6"] Jan 20 11:38:27 crc kubenswrapper[4725]: E0120 11:38:27.036695 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98772f19-fcd3-4ee3-91e7-aa87154c3c50" containerName="smoketest-collectd" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.036726 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="98772f19-fcd3-4ee3-91e7-aa87154c3c50" containerName="smoketest-collectd" Jan 20 11:38:27 crc kubenswrapper[4725]: E0120 11:38:27.036740 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98772f19-fcd3-4ee3-91e7-aa87154c3c50" containerName="smoketest-ceilometer" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.036746 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="98772f19-fcd3-4ee3-91e7-aa87154c3c50" containerName="smoketest-ceilometer" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.036907 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="98772f19-fcd3-4ee3-91e7-aa87154c3c50" containerName="smoketest-ceilometer" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 
11:38:27.036926 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="98772f19-fcd3-4ee3-91e7-aa87154c3c50" containerName="smoketest-collectd" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.037838 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.041193 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.041566 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.042660 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.043075 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.043290 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.043552 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.055378 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-z2qv6"] Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.091532 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-ceilometer-publisher\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: 
\"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.091598 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.091629 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.091659 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-sensubility-config\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.091709 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-collectd-config\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.091821 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h62mb\" 
(UniqueName: \"kubernetes.io/projected/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-kube-api-access-h62mb\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.091927 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-healthcheck-log\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.193224 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-ceilometer-publisher\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.193325 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.193363 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.193393 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-sensubility-config\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.193436 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-collectd-config\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.193469 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h62mb\" (UniqueName: \"kubernetes.io/projected/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-kube-api-access-h62mb\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.193522 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-healthcheck-log\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.194961 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-healthcheck-log\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.195024 4725 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.195967 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-ceilometer-publisher\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.196285 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.196404 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-collectd-config\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.197023 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-sensubility-config\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.221713 4725 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-h62mb\" (UniqueName: \"kubernetes.io/projected/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-kube-api-access-h62mb\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.382313 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.629709 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-z2qv6"] Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.760677 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qgltk"] Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.764469 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.773324 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qgltk"] Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.805961 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6692404-540c-447d-9548-777d22a10598-catalog-content\") pod \"redhat-operators-qgltk\" (UID: \"f6692404-540c-447d-9548-777d22a10598\") " pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.806070 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5tnl\" (UniqueName: \"kubernetes.io/projected/f6692404-540c-447d-9548-777d22a10598-kube-api-access-j5tnl\") pod \"redhat-operators-qgltk\" (UID: \"f6692404-540c-447d-9548-777d22a10598\") " 
pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.806173 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6692404-540c-447d-9548-777d22a10598-utilities\") pod \"redhat-operators-qgltk\" (UID: \"f6692404-540c-447d-9548-777d22a10598\") " pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.908092 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5tnl\" (UniqueName: \"kubernetes.io/projected/f6692404-540c-447d-9548-777d22a10598-kube-api-access-j5tnl\") pod \"redhat-operators-qgltk\" (UID: \"f6692404-540c-447d-9548-777d22a10598\") " pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.908205 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6692404-540c-447d-9548-777d22a10598-utilities\") pod \"redhat-operators-qgltk\" (UID: \"f6692404-540c-447d-9548-777d22a10598\") " pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.908305 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6692404-540c-447d-9548-777d22a10598-catalog-content\") pod \"redhat-operators-qgltk\" (UID: \"f6692404-540c-447d-9548-777d22a10598\") " pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.908805 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6692404-540c-447d-9548-777d22a10598-utilities\") pod \"redhat-operators-qgltk\" (UID: \"f6692404-540c-447d-9548-777d22a10598\") " pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 
11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.908931 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6692404-540c-447d-9548-777d22a10598-catalog-content\") pod \"redhat-operators-qgltk\" (UID: \"f6692404-540c-447d-9548-777d22a10598\") " pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.930145 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5tnl\" (UniqueName: \"kubernetes.io/projected/f6692404-540c-447d-9548-777d22a10598-kube-api-access-j5tnl\") pod \"redhat-operators-qgltk\" (UID: \"f6692404-540c-447d-9548-777d22a10598\") " pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 11:38:28 crc kubenswrapper[4725]: I0120 11:38:28.102988 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 11:38:28 crc kubenswrapper[4725]: I0120 11:38:28.290005 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-z2qv6" event={"ID":"6c81226d-b3a8-4f68-8c87-b32fe8ae7901","Type":"ContainerStarted","Data":"cea8f28970afd85bcf9b5b2a1925c6b3a3bfaa0434a211aa929c29e6b55f4044"} Jan 20 11:38:28 crc kubenswrapper[4725]: I0120 11:38:28.290520 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-z2qv6" event={"ID":"6c81226d-b3a8-4f68-8c87-b32fe8ae7901","Type":"ContainerStarted","Data":"75e63a488e52c52d2fda1015dcfb672de76425e3ba1b55bff85847b4bc5fcc5e"} Jan 20 11:38:28 crc kubenswrapper[4725]: I0120 11:38:28.290535 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-z2qv6" event={"ID":"6c81226d-b3a8-4f68-8c87-b32fe8ae7901","Type":"ContainerStarted","Data":"9c0050128f3c4711577642b7280a796228af17f2d79b5330b1bbbed61094b001"} Jan 20 11:38:28 crc kubenswrapper[4725]: I0120 11:38:28.329841 4725 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-z2qv6" podStartSLOduration=1.329799927 podStartE2EDuration="1.329799927s" podCreationTimestamp="2026-01-20 11:38:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:38:28.322132773 +0000 UTC m=+2036.530454756" watchObservedRunningTime="2026-01-20 11:38:28.329799927 +0000 UTC m=+2036.538121900" Jan 20 11:38:28 crc kubenswrapper[4725]: I0120 11:38:28.413591 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qgltk"] Jan 20 11:38:28 crc kubenswrapper[4725]: W0120 11:38:28.419274 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6692404_540c_447d_9548_777d22a10598.slice/crio-06ba4622145a6872dc1d1c19ada3feb43545a28e29b668ed3371e2d024154056 WatchSource:0}: Error finding container 06ba4622145a6872dc1d1c19ada3feb43545a28e29b668ed3371e2d024154056: Status 404 returned error can't find the container with id 06ba4622145a6872dc1d1c19ada3feb43545a28e29b668ed3371e2d024154056 Jan 20 11:38:29 crc kubenswrapper[4725]: I0120 11:38:29.301378 4725 generic.go:334] "Generic (PLEG): container finished" podID="f6692404-540c-447d-9548-777d22a10598" containerID="5daa7fc6f97cfce0ee4c32d2847c14fe84e7e398d899b66b4196a9bd3506efff" exitCode=0 Jan 20 11:38:29 crc kubenswrapper[4725]: I0120 11:38:29.301487 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qgltk" event={"ID":"f6692404-540c-447d-9548-777d22a10598","Type":"ContainerDied","Data":"5daa7fc6f97cfce0ee4c32d2847c14fe84e7e398d899b66b4196a9bd3506efff"} Jan 20 11:38:29 crc kubenswrapper[4725]: I0120 11:38:29.301573 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qgltk" 
event={"ID":"f6692404-540c-447d-9548-777d22a10598","Type":"ContainerStarted","Data":"06ba4622145a6872dc1d1c19ada3feb43545a28e29b668ed3371e2d024154056"} Jan 20 11:38:29 crc kubenswrapper[4725]: I0120 11:38:29.307057 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 11:38:31 crc kubenswrapper[4725]: I0120 11:38:31.320747 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qgltk" event={"ID":"f6692404-540c-447d-9548-777d22a10598","Type":"ContainerStarted","Data":"da175f450c8fc96efce509190386e65b54ee12a75eaf1e0b4ea51358e3788588"} Jan 20 11:38:36 crc kubenswrapper[4725]: I0120 11:38:36.323298 4725 generic.go:334] "Generic (PLEG): container finished" podID="f6692404-540c-447d-9548-777d22a10598" containerID="da175f450c8fc96efce509190386e65b54ee12a75eaf1e0b4ea51358e3788588" exitCode=0 Jan 20 11:38:36 crc kubenswrapper[4725]: I0120 11:38:36.323410 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qgltk" event={"ID":"f6692404-540c-447d-9548-777d22a10598","Type":"ContainerDied","Data":"da175f450c8fc96efce509190386e65b54ee12a75eaf1e0b4ea51358e3788588"} Jan 20 11:38:37 crc kubenswrapper[4725]: I0120 11:38:37.335968 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qgltk" event={"ID":"f6692404-540c-447d-9548-777d22a10598","Type":"ContainerStarted","Data":"068dedac7ac977cd6c48697e36fe58c8d3d11637d6b5a469b9f06c3b389a76f7"} Jan 20 11:38:37 crc kubenswrapper[4725]: I0120 11:38:37.367110 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qgltk" podStartSLOduration=2.878675688 podStartE2EDuration="10.367059968s" podCreationTimestamp="2026-01-20 11:38:27 +0000 UTC" firstStartedPulling="2026-01-20 11:38:29.306681978 +0000 UTC m=+2037.515003951" lastFinishedPulling="2026-01-20 11:38:36.795066258 +0000 UTC m=+2045.003388231" 
observedRunningTime="2026-01-20 11:38:37.358302112 +0000 UTC m=+2045.566624095" watchObservedRunningTime="2026-01-20 11:38:37.367059968 +0000 UTC m=+2045.575381961" Jan 20 11:38:38 crc kubenswrapper[4725]: I0120 11:38:38.103778 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 11:38:38 crc kubenswrapper[4725]: I0120 11:38:38.103977 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 11:38:39 crc kubenswrapper[4725]: I0120 11:38:39.154558 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qgltk" podUID="f6692404-540c-447d-9548-777d22a10598" containerName="registry-server" probeResult="failure" output=< Jan 20 11:38:39 crc kubenswrapper[4725]: timeout: failed to connect service ":50051" within 1s Jan 20 11:38:39 crc kubenswrapper[4725]: > Jan 20 11:38:48 crc kubenswrapper[4725]: I0120 11:38:48.149621 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 11:38:48 crc kubenswrapper[4725]: I0120 11:38:48.195000 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 11:38:48 crc kubenswrapper[4725]: I0120 11:38:48.386777 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qgltk"] Jan 20 11:38:49 crc kubenswrapper[4725]: I0120 11:38:49.431914 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qgltk" podUID="f6692404-540c-447d-9548-777d22a10598" containerName="registry-server" containerID="cri-o://068dedac7ac977cd6c48697e36fe58c8d3d11637d6b5a469b9f06c3b389a76f7" gracePeriod=2 Jan 20 11:38:49 crc kubenswrapper[4725]: I0120 11:38:49.848111 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 11:38:49 crc kubenswrapper[4725]: I0120 11:38:49.961125 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6692404-540c-447d-9548-777d22a10598-catalog-content\") pod \"f6692404-540c-447d-9548-777d22a10598\" (UID: \"f6692404-540c-447d-9548-777d22a10598\") " Jan 20 11:38:49 crc kubenswrapper[4725]: I0120 11:38:49.961194 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5tnl\" (UniqueName: \"kubernetes.io/projected/f6692404-540c-447d-9548-777d22a10598-kube-api-access-j5tnl\") pod \"f6692404-540c-447d-9548-777d22a10598\" (UID: \"f6692404-540c-447d-9548-777d22a10598\") " Jan 20 11:38:49 crc kubenswrapper[4725]: I0120 11:38:49.961241 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6692404-540c-447d-9548-777d22a10598-utilities\") pod \"f6692404-540c-447d-9548-777d22a10598\" (UID: \"f6692404-540c-447d-9548-777d22a10598\") " Jan 20 11:38:49 crc kubenswrapper[4725]: I0120 11:38:49.962765 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6692404-540c-447d-9548-777d22a10598-utilities" (OuterVolumeSpecName: "utilities") pod "f6692404-540c-447d-9548-777d22a10598" (UID: "f6692404-540c-447d-9548-777d22a10598"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:38:49 crc kubenswrapper[4725]: I0120 11:38:49.968825 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6692404-540c-447d-9548-777d22a10598-kube-api-access-j5tnl" (OuterVolumeSpecName: "kube-api-access-j5tnl") pod "f6692404-540c-447d-9548-777d22a10598" (UID: "f6692404-540c-447d-9548-777d22a10598"). InnerVolumeSpecName "kube-api-access-j5tnl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.063914 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5tnl\" (UniqueName: \"kubernetes.io/projected/f6692404-540c-447d-9548-777d22a10598-kube-api-access-j5tnl\") on node \"crc\" DevicePath \"\"" Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.064317 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6692404-540c-447d-9548-777d22a10598-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.101410 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6692404-540c-447d-9548-777d22a10598-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6692404-540c-447d-9548-777d22a10598" (UID: "f6692404-540c-447d-9548-777d22a10598"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.166204 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6692404-540c-447d-9548-777d22a10598-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.445296 4725 generic.go:334] "Generic (PLEG): container finished" podID="f6692404-540c-447d-9548-777d22a10598" containerID="068dedac7ac977cd6c48697e36fe58c8d3d11637d6b5a469b9f06c3b389a76f7" exitCode=0 Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.445366 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qgltk" event={"ID":"f6692404-540c-447d-9548-777d22a10598","Type":"ContainerDied","Data":"068dedac7ac977cd6c48697e36fe58c8d3d11637d6b5a469b9f06c3b389a76f7"} Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.445432 4725 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-qgltk" event={"ID":"f6692404-540c-447d-9548-777d22a10598","Type":"ContainerDied","Data":"06ba4622145a6872dc1d1c19ada3feb43545a28e29b668ed3371e2d024154056"} Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.445465 4725 scope.go:117] "RemoveContainer" containerID="068dedac7ac977cd6c48697e36fe58c8d3d11637d6b5a469b9f06c3b389a76f7" Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.446693 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.473740 4725 scope.go:117] "RemoveContainer" containerID="da175f450c8fc96efce509190386e65b54ee12a75eaf1e0b4ea51358e3788588" Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.489524 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qgltk"] Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.497596 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qgltk"] Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.514570 4725 scope.go:117] "RemoveContainer" containerID="5daa7fc6f97cfce0ee4c32d2847c14fe84e7e398d899b66b4196a9bd3506efff" Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.543105 4725 scope.go:117] "RemoveContainer" containerID="068dedac7ac977cd6c48697e36fe58c8d3d11637d6b5a469b9f06c3b389a76f7" Jan 20 11:38:50 crc kubenswrapper[4725]: E0120 11:38:50.544019 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"068dedac7ac977cd6c48697e36fe58c8d3d11637d6b5a469b9f06c3b389a76f7\": container with ID starting with 068dedac7ac977cd6c48697e36fe58c8d3d11637d6b5a469b9f06c3b389a76f7 not found: ID does not exist" containerID="068dedac7ac977cd6c48697e36fe58c8d3d11637d6b5a469b9f06c3b389a76f7" Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.544097 4725 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"068dedac7ac977cd6c48697e36fe58c8d3d11637d6b5a469b9f06c3b389a76f7"} err="failed to get container status \"068dedac7ac977cd6c48697e36fe58c8d3d11637d6b5a469b9f06c3b389a76f7\": rpc error: code = NotFound desc = could not find container \"068dedac7ac977cd6c48697e36fe58c8d3d11637d6b5a469b9f06c3b389a76f7\": container with ID starting with 068dedac7ac977cd6c48697e36fe58c8d3d11637d6b5a469b9f06c3b389a76f7 not found: ID does not exist" Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.544146 4725 scope.go:117] "RemoveContainer" containerID="da175f450c8fc96efce509190386e65b54ee12a75eaf1e0b4ea51358e3788588" Jan 20 11:38:50 crc kubenswrapper[4725]: E0120 11:38:50.544533 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da175f450c8fc96efce509190386e65b54ee12a75eaf1e0b4ea51358e3788588\": container with ID starting with da175f450c8fc96efce509190386e65b54ee12a75eaf1e0b4ea51358e3788588 not found: ID does not exist" containerID="da175f450c8fc96efce509190386e65b54ee12a75eaf1e0b4ea51358e3788588" Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.544559 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da175f450c8fc96efce509190386e65b54ee12a75eaf1e0b4ea51358e3788588"} err="failed to get container status \"da175f450c8fc96efce509190386e65b54ee12a75eaf1e0b4ea51358e3788588\": rpc error: code = NotFound desc = could not find container \"da175f450c8fc96efce509190386e65b54ee12a75eaf1e0b4ea51358e3788588\": container with ID starting with da175f450c8fc96efce509190386e65b54ee12a75eaf1e0b4ea51358e3788588 not found: ID does not exist" Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.544578 4725 scope.go:117] "RemoveContainer" containerID="5daa7fc6f97cfce0ee4c32d2847c14fe84e7e398d899b66b4196a9bd3506efff" Jan 20 11:38:50 crc kubenswrapper[4725]: E0120 
11:38:50.544889 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5daa7fc6f97cfce0ee4c32d2847c14fe84e7e398d899b66b4196a9bd3506efff\": container with ID starting with 5daa7fc6f97cfce0ee4c32d2847c14fe84e7e398d899b66b4196a9bd3506efff not found: ID does not exist" containerID="5daa7fc6f97cfce0ee4c32d2847c14fe84e7e398d899b66b4196a9bd3506efff" Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.544918 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5daa7fc6f97cfce0ee4c32d2847c14fe84e7e398d899b66b4196a9bd3506efff"} err="failed to get container status \"5daa7fc6f97cfce0ee4c32d2847c14fe84e7e398d899b66b4196a9bd3506efff\": rpc error: code = NotFound desc = could not find container \"5daa7fc6f97cfce0ee4c32d2847c14fe84e7e398d899b66b4196a9bd3506efff\": container with ID starting with 5daa7fc6f97cfce0ee4c32d2847c14fe84e7e398d899b66b4196a9bd3506efff not found: ID does not exist" Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.942772 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6692404-540c-447d-9548-777d22a10598" path="/var/lib/kubelet/pods/f6692404-540c-447d-9548-777d22a10598/volumes" Jan 20 11:39:01 crc kubenswrapper[4725]: I0120 11:39:01.564253 4725 generic.go:334] "Generic (PLEG): container finished" podID="6c81226d-b3a8-4f68-8c87-b32fe8ae7901" containerID="cea8f28970afd85bcf9b5b2a1925c6b3a3bfaa0434a211aa929c29e6b55f4044" exitCode=1 Jan 20 11:39:01 crc kubenswrapper[4725]: I0120 11:39:01.565220 4725 generic.go:334] "Generic (PLEG): container finished" podID="6c81226d-b3a8-4f68-8c87-b32fe8ae7901" containerID="75e63a488e52c52d2fda1015dcfb672de76425e3ba1b55bff85847b4bc5fcc5e" exitCode=1 Jan 20 11:39:01 crc kubenswrapper[4725]: I0120 11:39:01.564336 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-z2qv6" 
event={"ID":"6c81226d-b3a8-4f68-8c87-b32fe8ae7901","Type":"ContainerDied","Data":"cea8f28970afd85bcf9b5b2a1925c6b3a3bfaa0434a211aa929c29e6b55f4044"} Jan 20 11:39:01 crc kubenswrapper[4725]: I0120 11:39:01.565286 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-z2qv6" event={"ID":"6c81226d-b3a8-4f68-8c87-b32fe8ae7901","Type":"ContainerDied","Data":"75e63a488e52c52d2fda1015dcfb672de76425e3ba1b55bff85847b4bc5fcc5e"} Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.830819 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.867299 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-healthcheck-log\") pod \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.867362 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-sensubility-config\") pod \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.867435 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-ceilometer-publisher\") pod \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.868516 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h62mb\" (UniqueName: 
\"kubernetes.io/projected/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-kube-api-access-h62mb\") pod \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.868574 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-collectd-entrypoint-script\") pod \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.868625 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-ceilometer-entrypoint-script\") pod \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.868647 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-collectd-config\") pod \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.875186 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-kube-api-access-h62mb" (OuterVolumeSpecName: "kube-api-access-h62mb") pod "6c81226d-b3a8-4f68-8c87-b32fe8ae7901" (UID: "6c81226d-b3a8-4f68-8c87-b32fe8ae7901"). InnerVolumeSpecName "kube-api-access-h62mb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.891011 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "6c81226d-b3a8-4f68-8c87-b32fe8ae7901" (UID: "6c81226d-b3a8-4f68-8c87-b32fe8ae7901"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.891289 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "6c81226d-b3a8-4f68-8c87-b32fe8ae7901" (UID: "6c81226d-b3a8-4f68-8c87-b32fe8ae7901"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.892599 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "6c81226d-b3a8-4f68-8c87-b32fe8ae7901" (UID: "6c81226d-b3a8-4f68-8c87-b32fe8ae7901"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.892844 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "6c81226d-b3a8-4f68-8c87-b32fe8ae7901" (UID: "6c81226d-b3a8-4f68-8c87-b32fe8ae7901"). InnerVolumeSpecName "sensubility-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.895751 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "6c81226d-b3a8-4f68-8c87-b32fe8ae7901" (UID: "6c81226d-b3a8-4f68-8c87-b32fe8ae7901"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.896689 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "6c81226d-b3a8-4f68-8c87-b32fe8ae7901" (UID: "6c81226d-b3a8-4f68-8c87-b32fe8ae7901"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.970625 4725 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.970677 4725 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-collectd-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.970694 4725 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-healthcheck-log\") on node \"crc\" DevicePath \"\"" Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.970707 4725 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: 
\"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-sensubility-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.970719 4725 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.970737 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h62mb\" (UniqueName: \"kubernetes.io/projected/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-kube-api-access-h62mb\") on node \"crc\" DevicePath \"\"" Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.970750 4725 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 20 11:39:03 crc kubenswrapper[4725]: I0120 11:39:03.582215 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-z2qv6" event={"ID":"6c81226d-b3a8-4f68-8c87-b32fe8ae7901","Type":"ContainerDied","Data":"9c0050128f3c4711577642b7280a796228af17f2d79b5330b1bbbed61094b001"} Jan 20 11:39:03 crc kubenswrapper[4725]: I0120 11:39:03.582262 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:39:03 crc kubenswrapper[4725]: I0120 11:39:03.582285 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c0050128f3c4711577642b7280a796228af17f2d79b5330b1bbbed61094b001" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.036728 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-7l92d"] Jan 20 11:39:41 crc kubenswrapper[4725]: E0120 11:39:41.038004 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c81226d-b3a8-4f68-8c87-b32fe8ae7901" containerName="smoketest-ceilometer" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.038022 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c81226d-b3a8-4f68-8c87-b32fe8ae7901" containerName="smoketest-ceilometer" Jan 20 11:39:41 crc kubenswrapper[4725]: E0120 11:39:41.038034 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6692404-540c-447d-9548-777d22a10598" containerName="extract-content" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.038041 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6692404-540c-447d-9548-777d22a10598" containerName="extract-content" Jan 20 11:39:41 crc kubenswrapper[4725]: E0120 11:39:41.038054 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c81226d-b3a8-4f68-8c87-b32fe8ae7901" containerName="smoketest-collectd" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.038061 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c81226d-b3a8-4f68-8c87-b32fe8ae7901" containerName="smoketest-collectd" Jan 20 11:39:41 crc kubenswrapper[4725]: E0120 11:39:41.038072 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6692404-540c-447d-9548-777d22a10598" containerName="extract-utilities" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.038824 4725 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f6692404-540c-447d-9548-777d22a10598" containerName="extract-utilities" Jan 20 11:39:41 crc kubenswrapper[4725]: E0120 11:39:41.038839 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6692404-540c-447d-9548-777d22a10598" containerName="registry-server" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.038846 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6692404-540c-447d-9548-777d22a10598" containerName="registry-server" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.039013 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6692404-540c-447d-9548-777d22a10598" containerName="registry-server" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.039032 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c81226d-b3a8-4f68-8c87-b32fe8ae7901" containerName="smoketest-collectd" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.039042 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c81226d-b3a8-4f68-8c87-b32fe8ae7901" containerName="smoketest-ceilometer" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.039997 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.045896 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.045978 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.046068 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.045915 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.045915 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.046287 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.056511 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-7l92d"] Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.101340 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgnhm\" (UniqueName: \"kubernetes.io/projected/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-kube-api-access-dgnhm\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.101472 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" 
(UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-sensubility-config\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.101522 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-collectd-config\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.101543 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-healthcheck-log\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.101610 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.101655 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 
11:39:41.101707 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-ceilometer-publisher\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.202885 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-collectd-config\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.202945 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-healthcheck-log\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.202980 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.203001 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " 
pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.203029 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-ceilometer-publisher\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.203056 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgnhm\" (UniqueName: \"kubernetes.io/projected/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-kube-api-access-dgnhm\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.203138 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-sensubility-config\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.204680 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-sensubility-config\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.205090 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-collectd-config\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " 
pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.205134 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-healthcheck-log\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.205398 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.205546 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-ceilometer-publisher\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.206272 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.234056 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgnhm\" (UniqueName: \"kubernetes.io/projected/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-kube-api-access-dgnhm\") pod \"stf-smoketest-smoke1-7l92d\" (UID: 
\"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.363441 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.640966 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-7l92d"] Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.939222 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-7l92d" event={"ID":"6716d3a1-d97b-4a7f-9a35-7f304cc226ad","Type":"ContainerStarted","Data":"d88ad7a804a14de5ca4d9912edcde828fc8a64fa321a09e08e421555b29df5a4"} Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.939788 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-7l92d" event={"ID":"6716d3a1-d97b-4a7f-9a35-7f304cc226ad","Type":"ContainerStarted","Data":"3de7bd20454f36c3eb0175eedae39688ef0d7bed105bb1f393b399fcfe3733ca"} Jan 20 11:39:42 crc kubenswrapper[4725]: I0120 11:39:42.949955 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-7l92d" event={"ID":"6716d3a1-d97b-4a7f-9a35-7f304cc226ad","Type":"ContainerStarted","Data":"eb4c08870c8528b33be6d24bdbf794786b9e04c4abca7370b6a44ed218e39cd9"} Jan 20 11:39:42 crc kubenswrapper[4725]: I0120 11:39:42.994548 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-7l92d" podStartSLOduration=1.994512262 podStartE2EDuration="1.994512262s" podCreationTimestamp="2026-01-20 11:39:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:39:42.986066156 +0000 UTC m=+2111.194388139" watchObservedRunningTime="2026-01-20 11:39:42.994512262 +0000 UTC m=+2111.202834245" Jan 
20 11:39:56 crc kubenswrapper[4725]: I0120 11:39:56.728299 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:39:56 crc kubenswrapper[4725]: I0120 11:39:56.729222 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:40:15 crc kubenswrapper[4725]: I0120 11:40:15.283122 4725 generic.go:334] "Generic (PLEG): container finished" podID="6716d3a1-d97b-4a7f-9a35-7f304cc226ad" containerID="eb4c08870c8528b33be6d24bdbf794786b9e04c4abca7370b6a44ed218e39cd9" exitCode=0 Jan 20 11:40:15 crc kubenswrapper[4725]: I0120 11:40:15.283913 4725 generic.go:334] "Generic (PLEG): container finished" podID="6716d3a1-d97b-4a7f-9a35-7f304cc226ad" containerID="d88ad7a804a14de5ca4d9912edcde828fc8a64fa321a09e08e421555b29df5a4" exitCode=0 Jan 20 11:40:15 crc kubenswrapper[4725]: I0120 11:40:15.283196 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-7l92d" event={"ID":"6716d3a1-d97b-4a7f-9a35-7f304cc226ad","Type":"ContainerDied","Data":"eb4c08870c8528b33be6d24bdbf794786b9e04c4abca7370b6a44ed218e39cd9"} Jan 20 11:40:15 crc kubenswrapper[4725]: I0120 11:40:15.283967 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-7l92d" event={"ID":"6716d3a1-d97b-4a7f-9a35-7f304cc226ad","Type":"ContainerDied","Data":"d88ad7a804a14de5ca4d9912edcde828fc8a64fa321a09e08e421555b29df5a4"} Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.651016 4725 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.695029 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-collectd-entrypoint-script\") pod \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.695233 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-ceilometer-publisher\") pod \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.695319 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-collectd-config\") pod \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.695377 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgnhm\" (UniqueName: \"kubernetes.io/projected/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-kube-api-access-dgnhm\") pod \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.695427 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-healthcheck-log\") pod \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.695463 4725 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-sensubility-config\") pod \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.696723 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-ceilometer-entrypoint-script\") pod \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.703146 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-kube-api-access-dgnhm" (OuterVolumeSpecName: "kube-api-access-dgnhm") pod "6716d3a1-d97b-4a7f-9a35-7f304cc226ad" (UID: "6716d3a1-d97b-4a7f-9a35-7f304cc226ad"). InnerVolumeSpecName "kube-api-access-dgnhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.717043 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "6716d3a1-d97b-4a7f-9a35-7f304cc226ad" (UID: "6716d3a1-d97b-4a7f-9a35-7f304cc226ad"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.718010 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "6716d3a1-d97b-4a7f-9a35-7f304cc226ad" (UID: "6716d3a1-d97b-4a7f-9a35-7f304cc226ad"). 
InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.718529 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "6716d3a1-d97b-4a7f-9a35-7f304cc226ad" (UID: "6716d3a1-d97b-4a7f-9a35-7f304cc226ad"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.719662 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "6716d3a1-d97b-4a7f-9a35-7f304cc226ad" (UID: "6716d3a1-d97b-4a7f-9a35-7f304cc226ad"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.720038 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "6716d3a1-d97b-4a7f-9a35-7f304cc226ad" (UID: "6716d3a1-d97b-4a7f-9a35-7f304cc226ad"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.721977 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "6716d3a1-d97b-4a7f-9a35-7f304cc226ad" (UID: "6716d3a1-d97b-4a7f-9a35-7f304cc226ad"). InnerVolumeSpecName "collectd-entrypoint-script". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.798881 4725 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.798928 4725 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.798940 4725 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.798953 4725 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-collectd-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.798966 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgnhm\" (UniqueName: \"kubernetes.io/projected/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-kube-api-access-dgnhm\") on node \"crc\" DevicePath \"\"" Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.798975 4725 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-healthcheck-log\") on node \"crc\" DevicePath \"\"" Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.798982 4725 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-sensubility-config\") on node 
\"crc\" DevicePath \"\"" Jan 20 11:40:17 crc kubenswrapper[4725]: I0120 11:40:17.369950 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-7l92d" event={"ID":"6716d3a1-d97b-4a7f-9a35-7f304cc226ad","Type":"ContainerDied","Data":"3de7bd20454f36c3eb0175eedae39688ef0d7bed105bb1f393b399fcfe3733ca"} Jan 20 11:40:17 crc kubenswrapper[4725]: I0120 11:40:17.370028 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3de7bd20454f36c3eb0175eedae39688ef0d7bed105bb1f393b399fcfe3733ca" Jan 20 11:40:17 crc kubenswrapper[4725]: I0120 11:40:17.370098 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:40:18 crc kubenswrapper[4725]: I0120 11:40:18.360995 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-7l92d_6716d3a1-d97b-4a7f-9a35-7f304cc226ad/smoketest-collectd/0.log" Jan 20 11:40:18 crc kubenswrapper[4725]: I0120 11:40:18.636503 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-7l92d_6716d3a1-d97b-4a7f-9a35-7f304cc226ad/smoketest-ceilometer/0.log" Jan 20 11:40:18 crc kubenswrapper[4725]: I0120 11:40:18.902254 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-68864d46cb-mqfr7_5b2eb85b-dd29-4dc6-9d02-1087e7119ae7/default-interconnect/0.log" Jan 20 11:40:19 crc kubenswrapper[4725]: I0120 11:40:19.212778 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g_10b6bc99-b2ce-4952-a481-bbabe3a3fc16/bridge/2.log" Jan 20 11:40:19 crc kubenswrapper[4725]: I0120 11:40:19.473223 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g_10b6bc99-b2ce-4952-a481-bbabe3a3fc16/sg-core/0.log" Jan 20 11:40:19 
crc kubenswrapper[4725]: I0120 11:40:19.791992 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-ff457bf89-458zm_739b7c2c-b11b-4260-a184-7dd184677dad/bridge/2.log" Jan 20 11:40:20 crc kubenswrapper[4725]: I0120 11:40:20.101772 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-ff457bf89-458zm_739b7c2c-b11b-4260-a184-7dd184677dad/sg-core/0.log" Jan 20 11:40:20 crc kubenswrapper[4725]: I0120 11:40:20.383038 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p_6b74ea17-71c5-47e0-a15e-e963223f11f0/bridge/2.log" Jan 20 11:40:20 crc kubenswrapper[4725]: I0120 11:40:20.678133 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p_6b74ea17-71c5-47e0-a15e-e963223f11f0/sg-core/0.log" Jan 20 11:40:20 crc kubenswrapper[4725]: I0120 11:40:20.963680 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q_f84a2726-80cb-4393-84ca-d901b4ee446c/bridge/2.log" Jan 20 11:40:21 crc kubenswrapper[4725]: I0120 11:40:21.263522 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q_f84a2726-80cb-4393-84ca-d901b4ee446c/sg-core/0.log" Jan 20 11:40:21 crc kubenswrapper[4725]: I0120 11:40:21.534643 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7_14922311-0e93-4bf9-8980-72baefd93497/bridge/2.log" Jan 20 11:40:21 crc kubenswrapper[4725]: I0120 11:40:21.771413 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7_14922311-0e93-4bf9-8980-72baefd93497/sg-core/0.log" 
Jan 20 11:40:25 crc kubenswrapper[4725]: I0120 11:40:25.395699 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-86d4f8cb59-xtrqk_288c5de6-7288-478c-b790-1f348c4827f4/operator/0.log" Jan 20 11:40:25 crc kubenswrapper[4725]: I0120 11:40:25.671135 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_7d31d6ca-dd83-489d-9956-abb0947df80d/prometheus/0.log" Jan 20 11:40:25 crc kubenswrapper[4725]: I0120 11:40:25.951302 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6/elasticsearch/0.log" Jan 20 11:40:26 crc kubenswrapper[4725]: I0120 11:40:26.202916 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-fxcvg_c22fff0f-fa8e-40e0-a8dc-a138398b06e7/prometheus-webhook-snmp/0.log" Jan 20 11:40:26 crc kubenswrapper[4725]: I0120 11:40:26.518756 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_f490a619-9c48-49a0-857b-904084871923/alertmanager/0.log" Jan 20 11:40:26 crc kubenswrapper[4725]: I0120 11:40:26.728283 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:40:26 crc kubenswrapper[4725]: I0120 11:40:26.728369 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:40:44 crc kubenswrapper[4725]: I0120 11:40:44.728244 4725 log.go:25] 
"Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-9d4584887-5t9dx_653691a1-9088-47bd-97e2-4d2f17f885bf/operator/0.log" Jan 20 11:40:48 crc kubenswrapper[4725]: I0120 11:40:48.236684 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-86d4f8cb59-xtrqk_288c5de6-7288-478c-b790-1f348c4827f4/operator/0.log" Jan 20 11:40:48 crc kubenswrapper[4725]: I0120 11:40:48.545067 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_879163eb-1e0f-4030-aec9-69331c2e5ecd/qdr/0.log" Jan 20 11:40:56 crc kubenswrapper[4725]: I0120 11:40:56.727992 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:40:56 crc kubenswrapper[4725]: I0120 11:40:56.728900 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:40:56 crc kubenswrapper[4725]: I0120 11:40:56.728963 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:40:56 crc kubenswrapper[4725]: I0120 11:40:56.729874 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"292cc1dcad048c4d54590f6587ea4f65e53dd5f83f4235deae520cfb086277da"} pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 
11:40:56 crc kubenswrapper[4725]: I0120 11:40:56.729946 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" containerID="cri-o://292cc1dcad048c4d54590f6587ea4f65e53dd5f83f4235deae520cfb086277da" gracePeriod=600 Jan 20 11:40:57 crc kubenswrapper[4725]: I0120 11:40:57.759156 4725 generic.go:334] "Generic (PLEG): container finished" podID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerID="292cc1dcad048c4d54590f6587ea4f65e53dd5f83f4235deae520cfb086277da" exitCode=0 Jan 20 11:40:57 crc kubenswrapper[4725]: I0120 11:40:57.759226 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerDied","Data":"292cc1dcad048c4d54590f6587ea4f65e53dd5f83f4235deae520cfb086277da"} Jan 20 11:40:57 crc kubenswrapper[4725]: I0120 11:40:57.760110 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16"} Jan 20 11:40:57 crc kubenswrapper[4725]: I0120 11:40:57.760144 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.122309 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vh25r/must-gather-k86g8"] Jan 20 11:41:13 crc kubenswrapper[4725]: E0120 11:41:13.125064 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6716d3a1-d97b-4a7f-9a35-7f304cc226ad" containerName="smoketest-ceilometer" Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.125190 4725 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6716d3a1-d97b-4a7f-9a35-7f304cc226ad" containerName="smoketest-ceilometer" Jan 20 11:41:13 crc kubenswrapper[4725]: E0120 11:41:13.125280 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6716d3a1-d97b-4a7f-9a35-7f304cc226ad" containerName="smoketest-collectd" Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.125351 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="6716d3a1-d97b-4a7f-9a35-7f304cc226ad" containerName="smoketest-collectd" Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.125575 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="6716d3a1-d97b-4a7f-9a35-7f304cc226ad" containerName="smoketest-collectd" Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.125651 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="6716d3a1-d97b-4a7f-9a35-7f304cc226ad" containerName="smoketest-ceilometer" Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.126659 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vh25r/must-gather-k86g8" Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.129967 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vh25r"/"openshift-service-ca.crt" Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.131354 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vh25r"/"kube-root-ca.crt" Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.141522 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vh25r/must-gather-k86g8"] Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.287184 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrg7c\" (UniqueName: \"kubernetes.io/projected/44435c2f-00ef-4c8f-88f3-ff2e79476ff1-kube-api-access-mrg7c\") pod \"must-gather-k86g8\" (UID: \"44435c2f-00ef-4c8f-88f3-ff2e79476ff1\") " pod="openshift-must-gather-vh25r/must-gather-k86g8" Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.287818 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/44435c2f-00ef-4c8f-88f3-ff2e79476ff1-must-gather-output\") pod \"must-gather-k86g8\" (UID: \"44435c2f-00ef-4c8f-88f3-ff2e79476ff1\") " pod="openshift-must-gather-vh25r/must-gather-k86g8" Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.390012 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrg7c\" (UniqueName: \"kubernetes.io/projected/44435c2f-00ef-4c8f-88f3-ff2e79476ff1-kube-api-access-mrg7c\") pod \"must-gather-k86g8\" (UID: \"44435c2f-00ef-4c8f-88f3-ff2e79476ff1\") " pod="openshift-must-gather-vh25r/must-gather-k86g8" Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.390111 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/44435c2f-00ef-4c8f-88f3-ff2e79476ff1-must-gather-output\") pod \"must-gather-k86g8\" (UID: \"44435c2f-00ef-4c8f-88f3-ff2e79476ff1\") " pod="openshift-must-gather-vh25r/must-gather-k86g8" Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.390667 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/44435c2f-00ef-4c8f-88f3-ff2e79476ff1-must-gather-output\") pod \"must-gather-k86g8\" (UID: \"44435c2f-00ef-4c8f-88f3-ff2e79476ff1\") " pod="openshift-must-gather-vh25r/must-gather-k86g8" Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.431152 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrg7c\" (UniqueName: \"kubernetes.io/projected/44435c2f-00ef-4c8f-88f3-ff2e79476ff1-kube-api-access-mrg7c\") pod \"must-gather-k86g8\" (UID: \"44435c2f-00ef-4c8f-88f3-ff2e79476ff1\") " pod="openshift-must-gather-vh25r/must-gather-k86g8" Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.449035 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vh25r/must-gather-k86g8" Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.928841 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vh25r/must-gather-k86g8"] Jan 20 11:41:13 crc kubenswrapper[4725]: W0120 11:41:13.935024 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44435c2f_00ef_4c8f_88f3_ff2e79476ff1.slice/crio-e5f2af1c1b888f2e401b88b1f21fefa15f8c134ecf12e5b0d237cdc0f79b8f22 WatchSource:0}: Error finding container e5f2af1c1b888f2e401b88b1f21fefa15f8c134ecf12e5b0d237cdc0f79b8f22: Status 404 returned error can't find the container with id e5f2af1c1b888f2e401b88b1f21fefa15f8c134ecf12e5b0d237cdc0f79b8f22 Jan 20 11:41:14 crc kubenswrapper[4725]: I0120 11:41:14.919440 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vh25r/must-gather-k86g8" event={"ID":"44435c2f-00ef-4c8f-88f3-ff2e79476ff1","Type":"ContainerStarted","Data":"e5f2af1c1b888f2e401b88b1f21fefa15f8c134ecf12e5b0d237cdc0f79b8f22"} Jan 20 11:41:19 crc kubenswrapper[4725]: I0120 11:41:19.595264 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pf4tp"] Jan 20 11:41:19 crc kubenswrapper[4725]: I0120 11:41:19.597981 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pf4tp" Jan 20 11:41:19 crc kubenswrapper[4725]: I0120 11:41:19.606223 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pf4tp"] Jan 20 11:41:19 crc kubenswrapper[4725]: I0120 11:41:19.691870 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e27af684-a552-4b4d-ab63-82b662b0dad7-catalog-content\") pod \"certified-operators-pf4tp\" (UID: \"e27af684-a552-4b4d-ab63-82b662b0dad7\") " pod="openshift-marketplace/certified-operators-pf4tp" Jan 20 11:41:19 crc kubenswrapper[4725]: I0120 11:41:19.691948 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e27af684-a552-4b4d-ab63-82b662b0dad7-utilities\") pod \"certified-operators-pf4tp\" (UID: \"e27af684-a552-4b4d-ab63-82b662b0dad7\") " pod="openshift-marketplace/certified-operators-pf4tp" Jan 20 11:41:19 crc kubenswrapper[4725]: I0120 11:41:19.692001 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k599t\" (UniqueName: \"kubernetes.io/projected/e27af684-a552-4b4d-ab63-82b662b0dad7-kube-api-access-k599t\") pod \"certified-operators-pf4tp\" (UID: \"e27af684-a552-4b4d-ab63-82b662b0dad7\") " pod="openshift-marketplace/certified-operators-pf4tp" Jan 20 11:41:19 crc kubenswrapper[4725]: I0120 11:41:19.794126 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e27af684-a552-4b4d-ab63-82b662b0dad7-catalog-content\") pod \"certified-operators-pf4tp\" (UID: \"e27af684-a552-4b4d-ab63-82b662b0dad7\") " pod="openshift-marketplace/certified-operators-pf4tp" Jan 20 11:41:19 crc kubenswrapper[4725]: I0120 11:41:19.794225 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e27af684-a552-4b4d-ab63-82b662b0dad7-utilities\") pod \"certified-operators-pf4tp\" (UID: \"e27af684-a552-4b4d-ab63-82b662b0dad7\") " pod="openshift-marketplace/certified-operators-pf4tp" Jan 20 11:41:19 crc kubenswrapper[4725]: I0120 11:41:19.794296 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k599t\" (UniqueName: \"kubernetes.io/projected/e27af684-a552-4b4d-ab63-82b662b0dad7-kube-api-access-k599t\") pod \"certified-operators-pf4tp\" (UID: \"e27af684-a552-4b4d-ab63-82b662b0dad7\") " pod="openshift-marketplace/certified-operators-pf4tp" Jan 20 11:41:19 crc kubenswrapper[4725]: I0120 11:41:19.795580 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e27af684-a552-4b4d-ab63-82b662b0dad7-catalog-content\") pod \"certified-operators-pf4tp\" (UID: \"e27af684-a552-4b4d-ab63-82b662b0dad7\") " pod="openshift-marketplace/certified-operators-pf4tp" Jan 20 11:41:19 crc kubenswrapper[4725]: I0120 11:41:19.795923 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e27af684-a552-4b4d-ab63-82b662b0dad7-utilities\") pod \"certified-operators-pf4tp\" (UID: \"e27af684-a552-4b4d-ab63-82b662b0dad7\") " pod="openshift-marketplace/certified-operators-pf4tp" Jan 20 11:41:19 crc kubenswrapper[4725]: I0120 11:41:19.821568 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k599t\" (UniqueName: \"kubernetes.io/projected/e27af684-a552-4b4d-ab63-82b662b0dad7-kube-api-access-k599t\") pod \"certified-operators-pf4tp\" (UID: \"e27af684-a552-4b4d-ab63-82b662b0dad7\") " pod="openshift-marketplace/certified-operators-pf4tp" Jan 20 11:41:19 crc kubenswrapper[4725]: I0120 11:41:19.941288 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pf4tp" Jan 20 11:41:21 crc kubenswrapper[4725]: I0120 11:41:21.387447 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-jkmc4"] Jan 20 11:41:21 crc kubenswrapper[4725]: I0120 11:41:21.391450 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-jkmc4" Jan 20 11:41:21 crc kubenswrapper[4725]: I0120 11:41:21.394558 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-jkmc4"] Jan 20 11:41:21 crc kubenswrapper[4725]: I0120 11:41:21.418739 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb4p5\" (UniqueName: \"kubernetes.io/projected/92a63a6c-5e81-4cb6-8c56-ee0673d781fa-kube-api-access-vb4p5\") pod \"infrawatch-operators-jkmc4\" (UID: \"92a63a6c-5e81-4cb6-8c56-ee0673d781fa\") " pod="service-telemetry/infrawatch-operators-jkmc4" Jan 20 11:41:21 crc kubenswrapper[4725]: I0120 11:41:21.520905 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb4p5\" (UniqueName: \"kubernetes.io/projected/92a63a6c-5e81-4cb6-8c56-ee0673d781fa-kube-api-access-vb4p5\") pod \"infrawatch-operators-jkmc4\" (UID: \"92a63a6c-5e81-4cb6-8c56-ee0673d781fa\") " pod="service-telemetry/infrawatch-operators-jkmc4" Jan 20 11:41:21 crc kubenswrapper[4725]: I0120 11:41:21.595662 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb4p5\" (UniqueName: \"kubernetes.io/projected/92a63a6c-5e81-4cb6-8c56-ee0673d781fa-kube-api-access-vb4p5\") pod \"infrawatch-operators-jkmc4\" (UID: \"92a63a6c-5e81-4cb6-8c56-ee0673d781fa\") " pod="service-telemetry/infrawatch-operators-jkmc4" Jan 20 11:41:21 crc kubenswrapper[4725]: I0120 11:41:21.728241 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-jkmc4" Jan 20 11:41:23 crc kubenswrapper[4725]: I0120 11:41:22.999510 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pf4tp"] Jan 20 11:41:23 crc kubenswrapper[4725]: I0120 11:41:23.015477 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vh25r/must-gather-k86g8" event={"ID":"44435c2f-00ef-4c8f-88f3-ff2e79476ff1","Type":"ContainerStarted","Data":"8a41cc5c48cfc65ba6796c1be7ac535542f3f32d228f563264b307dc10ebe1c3"} Jan 20 11:41:23 crc kubenswrapper[4725]: I0120 11:41:23.049794 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-jkmc4"] Jan 20 11:41:23 crc kubenswrapper[4725]: W0120 11:41:23.057331 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92a63a6c_5e81_4cb6_8c56_ee0673d781fa.slice/crio-2d85c1bbc386137a17d093347dadb65cf240291c90174ff10995d5959be3d358 WatchSource:0}: Error finding container 2d85c1bbc386137a17d093347dadb65cf240291c90174ff10995d5959be3d358: Status 404 returned error can't find the container with id 2d85c1bbc386137a17d093347dadb65cf240291c90174ff10995d5959be3d358 Jan 20 11:41:24 crc kubenswrapper[4725]: I0120 11:41:24.027708 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vh25r/must-gather-k86g8" event={"ID":"44435c2f-00ef-4c8f-88f3-ff2e79476ff1","Type":"ContainerStarted","Data":"0d57c69c9dd782acdf37233eaaaa9cc500fb981a71b9acbd961994f813858120"} Jan 20 11:41:24 crc kubenswrapper[4725]: I0120 11:41:24.030015 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-jkmc4" event={"ID":"92a63a6c-5e81-4cb6-8c56-ee0673d781fa","Type":"ContainerStarted","Data":"0380d57447b0c0fe229e2df297afe5e1255e5dd1be08215dc551ed228cc51d30"} Jan 20 11:41:24 crc kubenswrapper[4725]: I0120 11:41:24.030107 4725 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-jkmc4" event={"ID":"92a63a6c-5e81-4cb6-8c56-ee0673d781fa","Type":"ContainerStarted","Data":"2d85c1bbc386137a17d093347dadb65cf240291c90174ff10995d5959be3d358"} Jan 20 11:41:24 crc kubenswrapper[4725]: I0120 11:41:24.032466 4725 generic.go:334] "Generic (PLEG): container finished" podID="e27af684-a552-4b4d-ab63-82b662b0dad7" containerID="7b08d6289c1616dbde45c672cda424f7fad6ac36308604fafdb179ba44dc1074" exitCode=0 Jan 20 11:41:24 crc kubenswrapper[4725]: I0120 11:41:24.032515 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pf4tp" event={"ID":"e27af684-a552-4b4d-ab63-82b662b0dad7","Type":"ContainerDied","Data":"7b08d6289c1616dbde45c672cda424f7fad6ac36308604fafdb179ba44dc1074"} Jan 20 11:41:24 crc kubenswrapper[4725]: I0120 11:41:24.032550 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pf4tp" event={"ID":"e27af684-a552-4b4d-ab63-82b662b0dad7","Type":"ContainerStarted","Data":"62fa7c0461e72580f67e750e5a76c501b04839f3e34ae343fd69306f9db9dd66"} Jan 20 11:41:24 crc kubenswrapper[4725]: I0120 11:41:24.059873 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vh25r/must-gather-k86g8" podStartSLOduration=2.685537227 podStartE2EDuration="11.059839031s" podCreationTimestamp="2026-01-20 11:41:13 +0000 UTC" firstStartedPulling="2026-01-20 11:41:13.937625766 +0000 UTC m=+2202.145947739" lastFinishedPulling="2026-01-20 11:41:22.31192757 +0000 UTC m=+2210.520249543" observedRunningTime="2026-01-20 11:41:24.052030845 +0000 UTC m=+2212.260352848" watchObservedRunningTime="2026-01-20 11:41:24.059839031 +0000 UTC m=+2212.268161004" Jan 20 11:41:24 crc kubenswrapper[4725]: I0120 11:41:24.097558 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-jkmc4" 
podStartSLOduration=2.9462605269999997 podStartE2EDuration="3.097531511s" podCreationTimestamp="2026-01-20 11:41:21 +0000 UTC" firstStartedPulling="2026-01-20 11:41:23.063588161 +0000 UTC m=+2211.271910134" lastFinishedPulling="2026-01-20 11:41:23.214859145 +0000 UTC m=+2211.423181118" observedRunningTime="2026-01-20 11:41:24.089894469 +0000 UTC m=+2212.298216442" watchObservedRunningTime="2026-01-20 11:41:24.097531511 +0000 UTC m=+2212.305853474" Jan 20 11:41:26 crc kubenswrapper[4725]: I0120 11:41:26.054890 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pf4tp" event={"ID":"e27af684-a552-4b4d-ab63-82b662b0dad7","Type":"ContainerStarted","Data":"1292d932a1ba78ba79ce89e3eede5fdb84982f46def0c6932e935c1844c8d877"} Jan 20 11:41:27 crc kubenswrapper[4725]: I0120 11:41:27.085025 4725 generic.go:334] "Generic (PLEG): container finished" podID="e27af684-a552-4b4d-ab63-82b662b0dad7" containerID="1292d932a1ba78ba79ce89e3eede5fdb84982f46def0c6932e935c1844c8d877" exitCode=0 Jan 20 11:41:27 crc kubenswrapper[4725]: I0120 11:41:27.085454 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pf4tp" event={"ID":"e27af684-a552-4b4d-ab63-82b662b0dad7","Type":"ContainerDied","Data":"1292d932a1ba78ba79ce89e3eede5fdb84982f46def0c6932e935c1844c8d877"} Jan 20 11:41:29 crc kubenswrapper[4725]: I0120 11:41:29.111527 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pf4tp" event={"ID":"e27af684-a552-4b4d-ab63-82b662b0dad7","Type":"ContainerStarted","Data":"fc40305bef50353ae620b2afda85362341cabc59e022ee456f3f5c84ff80649f"} Jan 20 11:41:29 crc kubenswrapper[4725]: I0120 11:41:29.140976 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pf4tp" podStartSLOduration=6.235501384 podStartE2EDuration="10.140948985s" podCreationTimestamp="2026-01-20 11:41:19 +0000 UTC" 
firstStartedPulling="2026-01-20 11:41:24.034294615 +0000 UTC m=+2212.242616588" lastFinishedPulling="2026-01-20 11:41:27.939742216 +0000 UTC m=+2216.148064189" observedRunningTime="2026-01-20 11:41:29.136907578 +0000 UTC m=+2217.345229551" watchObservedRunningTime="2026-01-20 11:41:29.140948985 +0000 UTC m=+2217.349270968" Jan 20 11:41:29 crc kubenswrapper[4725]: I0120 11:41:29.941898 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pf4tp" Jan 20 11:41:29 crc kubenswrapper[4725]: I0120 11:41:29.941968 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pf4tp" Jan 20 11:41:30 crc kubenswrapper[4725]: I0120 11:41:30.991894 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-pf4tp" podUID="e27af684-a552-4b4d-ab63-82b662b0dad7" containerName="registry-server" probeResult="failure" output=< Jan 20 11:41:30 crc kubenswrapper[4725]: timeout: failed to connect service ":50051" within 1s Jan 20 11:41:30 crc kubenswrapper[4725]: > Jan 20 11:41:31 crc kubenswrapper[4725]: I0120 11:41:31.728702 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-jkmc4" Jan 20 11:41:31 crc kubenswrapper[4725]: I0120 11:41:31.728779 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-jkmc4" Jan 20 11:41:31 crc kubenswrapper[4725]: I0120 11:41:31.765805 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-jkmc4" Jan 20 11:41:32 crc kubenswrapper[4725]: I0120 11:41:32.166695 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-jkmc4" Jan 20 11:41:35 crc kubenswrapper[4725]: I0120 11:41:35.341430 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["service-telemetry/infrawatch-operators-jkmc4"] Jan 20 11:41:35 crc kubenswrapper[4725]: I0120 11:41:35.343633 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-jkmc4" podUID="92a63a6c-5e81-4cb6-8c56-ee0673d781fa" containerName="registry-server" containerID="cri-o://0380d57447b0c0fe229e2df297afe5e1255e5dd1be08215dc551ed228cc51d30" gracePeriod=2 Jan 20 11:41:35 crc kubenswrapper[4725]: I0120 11:41:35.765983 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-jkmc4" Jan 20 11:41:35 crc kubenswrapper[4725]: I0120 11:41:35.826603 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb4p5\" (UniqueName: \"kubernetes.io/projected/92a63a6c-5e81-4cb6-8c56-ee0673d781fa-kube-api-access-vb4p5\") pod \"92a63a6c-5e81-4cb6-8c56-ee0673d781fa\" (UID: \"92a63a6c-5e81-4cb6-8c56-ee0673d781fa\") " Jan 20 11:41:35 crc kubenswrapper[4725]: I0120 11:41:35.835063 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92a63a6c-5e81-4cb6-8c56-ee0673d781fa-kube-api-access-vb4p5" (OuterVolumeSpecName: "kube-api-access-vb4p5") pod "92a63a6c-5e81-4cb6-8c56-ee0673d781fa" (UID: "92a63a6c-5e81-4cb6-8c56-ee0673d781fa"). InnerVolumeSpecName "kube-api-access-vb4p5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:41:35 crc kubenswrapper[4725]: I0120 11:41:35.928332 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb4p5\" (UniqueName: \"kubernetes.io/projected/92a63a6c-5e81-4cb6-8c56-ee0673d781fa-kube-api-access-vb4p5\") on node \"crc\" DevicePath \"\"" Jan 20 11:41:36 crc kubenswrapper[4725]: I0120 11:41:36.337000 4725 generic.go:334] "Generic (PLEG): container finished" podID="92a63a6c-5e81-4cb6-8c56-ee0673d781fa" containerID="0380d57447b0c0fe229e2df297afe5e1255e5dd1be08215dc551ed228cc51d30" exitCode=0 Jan 20 11:41:36 crc kubenswrapper[4725]: I0120 11:41:36.337067 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-jkmc4" Jan 20 11:41:36 crc kubenswrapper[4725]: I0120 11:41:36.337061 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-jkmc4" event={"ID":"92a63a6c-5e81-4cb6-8c56-ee0673d781fa","Type":"ContainerDied","Data":"0380d57447b0c0fe229e2df297afe5e1255e5dd1be08215dc551ed228cc51d30"} Jan 20 11:41:36 crc kubenswrapper[4725]: I0120 11:41:36.337137 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-jkmc4" event={"ID":"92a63a6c-5e81-4cb6-8c56-ee0673d781fa","Type":"ContainerDied","Data":"2d85c1bbc386137a17d093347dadb65cf240291c90174ff10995d5959be3d358"} Jan 20 11:41:36 crc kubenswrapper[4725]: I0120 11:41:36.337167 4725 scope.go:117] "RemoveContainer" containerID="0380d57447b0c0fe229e2df297afe5e1255e5dd1be08215dc551ed228cc51d30" Jan 20 11:41:36 crc kubenswrapper[4725]: I0120 11:41:36.364162 4725 scope.go:117] "RemoveContainer" containerID="0380d57447b0c0fe229e2df297afe5e1255e5dd1be08215dc551ed228cc51d30" Jan 20 11:41:36 crc kubenswrapper[4725]: E0120 11:41:36.364961 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"0380d57447b0c0fe229e2df297afe5e1255e5dd1be08215dc551ed228cc51d30\": container with ID starting with 0380d57447b0c0fe229e2df297afe5e1255e5dd1be08215dc551ed228cc51d30 not found: ID does not exist" containerID="0380d57447b0c0fe229e2df297afe5e1255e5dd1be08215dc551ed228cc51d30" Jan 20 11:41:36 crc kubenswrapper[4725]: I0120 11:41:36.365002 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0380d57447b0c0fe229e2df297afe5e1255e5dd1be08215dc551ed228cc51d30"} err="failed to get container status \"0380d57447b0c0fe229e2df297afe5e1255e5dd1be08215dc551ed228cc51d30\": rpc error: code = NotFound desc = could not find container \"0380d57447b0c0fe229e2df297afe5e1255e5dd1be08215dc551ed228cc51d30\": container with ID starting with 0380d57447b0c0fe229e2df297afe5e1255e5dd1be08215dc551ed228cc51d30 not found: ID does not exist" Jan 20 11:41:36 crc kubenswrapper[4725]: I0120 11:41:36.376355 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-jkmc4"] Jan 20 11:41:36 crc kubenswrapper[4725]: I0120 11:41:36.383209 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-jkmc4"] Jan 20 11:41:36 crc kubenswrapper[4725]: I0120 11:41:36.941776 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92a63a6c-5e81-4cb6-8c56-ee0673d781fa" path="/var/lib/kubelet/pods/92a63a6c-5e81-4cb6-8c56-ee0673d781fa/volumes" Jan 20 11:41:39 crc kubenswrapper[4725]: I0120 11:41:39.161413 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-sh5db_b07c5d50-bb91-412d-b86a-3d736a16a81d/control-plane-machine-set-operator/0.log" Jan 20 11:41:39 crc kubenswrapper[4725]: I0120 11:41:39.183486 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hhz9f_808fb947-228d-42c4-ba11-480348f80d8a/kube-rbac-proxy/0.log" Jan 20 
11:41:39 crc kubenswrapper[4725]: I0120 11:41:39.193549 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hhz9f_808fb947-228d-42c4-ba11-480348f80d8a/machine-api-operator/0.log" Jan 20 11:41:39 crc kubenswrapper[4725]: I0120 11:41:39.993150 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pf4tp" Jan 20 11:41:40 crc kubenswrapper[4725]: I0120 11:41:40.044671 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pf4tp" Jan 20 11:41:42 crc kubenswrapper[4725]: I0120 11:41:42.362584 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pf4tp"] Jan 20 11:41:42 crc kubenswrapper[4725]: I0120 11:41:42.363362 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pf4tp" podUID="e27af684-a552-4b4d-ab63-82b662b0dad7" containerName="registry-server" containerID="cri-o://fc40305bef50353ae620b2afda85362341cabc59e022ee456f3f5c84ff80649f" gracePeriod=2 Jan 20 11:41:42 crc kubenswrapper[4725]: I0120 11:41:42.773276 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pf4tp" Jan 20 11:41:42 crc kubenswrapper[4725]: I0120 11:41:42.864271 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k599t\" (UniqueName: \"kubernetes.io/projected/e27af684-a552-4b4d-ab63-82b662b0dad7-kube-api-access-k599t\") pod \"e27af684-a552-4b4d-ab63-82b662b0dad7\" (UID: \"e27af684-a552-4b4d-ab63-82b662b0dad7\") " Jan 20 11:41:42 crc kubenswrapper[4725]: I0120 11:41:42.864416 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e27af684-a552-4b4d-ab63-82b662b0dad7-catalog-content\") pod \"e27af684-a552-4b4d-ab63-82b662b0dad7\" (UID: \"e27af684-a552-4b4d-ab63-82b662b0dad7\") " Jan 20 11:41:42 crc kubenswrapper[4725]: I0120 11:41:42.864453 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e27af684-a552-4b4d-ab63-82b662b0dad7-utilities\") pod \"e27af684-a552-4b4d-ab63-82b662b0dad7\" (UID: \"e27af684-a552-4b4d-ab63-82b662b0dad7\") " Jan 20 11:41:42 crc kubenswrapper[4725]: I0120 11:41:42.866291 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e27af684-a552-4b4d-ab63-82b662b0dad7-utilities" (OuterVolumeSpecName: "utilities") pod "e27af684-a552-4b4d-ab63-82b662b0dad7" (UID: "e27af684-a552-4b4d-ab63-82b662b0dad7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:41:42 crc kubenswrapper[4725]: I0120 11:41:42.873237 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e27af684-a552-4b4d-ab63-82b662b0dad7-kube-api-access-k599t" (OuterVolumeSpecName: "kube-api-access-k599t") pod "e27af684-a552-4b4d-ab63-82b662b0dad7" (UID: "e27af684-a552-4b4d-ab63-82b662b0dad7"). InnerVolumeSpecName "kube-api-access-k599t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:41:42 crc kubenswrapper[4725]: I0120 11:41:42.922022 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e27af684-a552-4b4d-ab63-82b662b0dad7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e27af684-a552-4b4d-ab63-82b662b0dad7" (UID: "e27af684-a552-4b4d-ab63-82b662b0dad7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:41:42 crc kubenswrapper[4725]: I0120 11:41:42.966288 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k599t\" (UniqueName: \"kubernetes.io/projected/e27af684-a552-4b4d-ab63-82b662b0dad7-kube-api-access-k599t\") on node \"crc\" DevicePath \"\"" Jan 20 11:41:42 crc kubenswrapper[4725]: I0120 11:41:42.966656 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e27af684-a552-4b4d-ab63-82b662b0dad7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:41:42 crc kubenswrapper[4725]: I0120 11:41:42.966758 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e27af684-a552-4b4d-ab63-82b662b0dad7-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:41:43 crc kubenswrapper[4725]: I0120 11:41:43.403435 4725 generic.go:334] "Generic (PLEG): container finished" podID="e27af684-a552-4b4d-ab63-82b662b0dad7" containerID="fc40305bef50353ae620b2afda85362341cabc59e022ee456f3f5c84ff80649f" exitCode=0 Jan 20 11:41:43 crc kubenswrapper[4725]: I0120 11:41:43.403508 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pf4tp" event={"ID":"e27af684-a552-4b4d-ab63-82b662b0dad7","Type":"ContainerDied","Data":"fc40305bef50353ae620b2afda85362341cabc59e022ee456f3f5c84ff80649f"} Jan 20 11:41:43 crc kubenswrapper[4725]: I0120 11:41:43.403559 4725 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-pf4tp" event={"ID":"e27af684-a552-4b4d-ab63-82b662b0dad7","Type":"ContainerDied","Data":"62fa7c0461e72580f67e750e5a76c501b04839f3e34ae343fd69306f9db9dd66"} Jan 20 11:41:43 crc kubenswrapper[4725]: I0120 11:41:43.403582 4725 scope.go:117] "RemoveContainer" containerID="fc40305bef50353ae620b2afda85362341cabc59e022ee456f3f5c84ff80649f" Jan 20 11:41:43 crc kubenswrapper[4725]: I0120 11:41:43.403577 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pf4tp" Jan 20 11:41:43 crc kubenswrapper[4725]: I0120 11:41:43.434229 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pf4tp"] Jan 20 11:41:43 crc kubenswrapper[4725]: I0120 11:41:43.443336 4725 scope.go:117] "RemoveContainer" containerID="1292d932a1ba78ba79ce89e3eede5fdb84982f46def0c6932e935c1844c8d877" Jan 20 11:41:43 crc kubenswrapper[4725]: I0120 11:41:43.448438 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pf4tp"] Jan 20 11:41:43 crc kubenswrapper[4725]: I0120 11:41:43.467538 4725 scope.go:117] "RemoveContainer" containerID="7b08d6289c1616dbde45c672cda424f7fad6ac36308604fafdb179ba44dc1074" Jan 20 11:41:43 crc kubenswrapper[4725]: I0120 11:41:43.500448 4725 scope.go:117] "RemoveContainer" containerID="fc40305bef50353ae620b2afda85362341cabc59e022ee456f3f5c84ff80649f" Jan 20 11:41:43 crc kubenswrapper[4725]: E0120 11:41:43.501620 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc40305bef50353ae620b2afda85362341cabc59e022ee456f3f5c84ff80649f\": container with ID starting with fc40305bef50353ae620b2afda85362341cabc59e022ee456f3f5c84ff80649f not found: ID does not exist" containerID="fc40305bef50353ae620b2afda85362341cabc59e022ee456f3f5c84ff80649f" Jan 20 11:41:43 crc kubenswrapper[4725]: I0120 
11:41:43.501699 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc40305bef50353ae620b2afda85362341cabc59e022ee456f3f5c84ff80649f"} err="failed to get container status \"fc40305bef50353ae620b2afda85362341cabc59e022ee456f3f5c84ff80649f\": rpc error: code = NotFound desc = could not find container \"fc40305bef50353ae620b2afda85362341cabc59e022ee456f3f5c84ff80649f\": container with ID starting with fc40305bef50353ae620b2afda85362341cabc59e022ee456f3f5c84ff80649f not found: ID does not exist" Jan 20 11:41:43 crc kubenswrapper[4725]: I0120 11:41:43.501748 4725 scope.go:117] "RemoveContainer" containerID="1292d932a1ba78ba79ce89e3eede5fdb84982f46def0c6932e935c1844c8d877" Jan 20 11:41:43 crc kubenswrapper[4725]: E0120 11:41:43.502312 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1292d932a1ba78ba79ce89e3eede5fdb84982f46def0c6932e935c1844c8d877\": container with ID starting with 1292d932a1ba78ba79ce89e3eede5fdb84982f46def0c6932e935c1844c8d877 not found: ID does not exist" containerID="1292d932a1ba78ba79ce89e3eede5fdb84982f46def0c6932e935c1844c8d877" Jan 20 11:41:43 crc kubenswrapper[4725]: I0120 11:41:43.502360 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1292d932a1ba78ba79ce89e3eede5fdb84982f46def0c6932e935c1844c8d877"} err="failed to get container status \"1292d932a1ba78ba79ce89e3eede5fdb84982f46def0c6932e935c1844c8d877\": rpc error: code = NotFound desc = could not find container \"1292d932a1ba78ba79ce89e3eede5fdb84982f46def0c6932e935c1844c8d877\": container with ID starting with 1292d932a1ba78ba79ce89e3eede5fdb84982f46def0c6932e935c1844c8d877 not found: ID does not exist" Jan 20 11:41:43 crc kubenswrapper[4725]: I0120 11:41:43.502385 4725 scope.go:117] "RemoveContainer" containerID="7b08d6289c1616dbde45c672cda424f7fad6ac36308604fafdb179ba44dc1074" Jan 20 11:41:43 crc 
kubenswrapper[4725]: E0120 11:41:43.502802 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b08d6289c1616dbde45c672cda424f7fad6ac36308604fafdb179ba44dc1074\": container with ID starting with 7b08d6289c1616dbde45c672cda424f7fad6ac36308604fafdb179ba44dc1074 not found: ID does not exist" containerID="7b08d6289c1616dbde45c672cda424f7fad6ac36308604fafdb179ba44dc1074" Jan 20 11:41:43 crc kubenswrapper[4725]: I0120 11:41:43.502828 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b08d6289c1616dbde45c672cda424f7fad6ac36308604fafdb179ba44dc1074"} err="failed to get container status \"7b08d6289c1616dbde45c672cda424f7fad6ac36308604fafdb179ba44dc1074\": rpc error: code = NotFound desc = could not find container \"7b08d6289c1616dbde45c672cda424f7fad6ac36308604fafdb179ba44dc1074\": container with ID starting with 7b08d6289c1616dbde45c672cda424f7fad6ac36308604fafdb179ba44dc1074 not found: ID does not exist" Jan 20 11:41:44 crc kubenswrapper[4725]: I0120 11:41:44.646485 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-8pwdf_f31ab59c-7288-4ebb-82b4-daa77ec5319c/cert-manager-controller/0.log" Jan 20 11:41:44 crc kubenswrapper[4725]: I0120 11:41:44.665089 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-2m9v2_62554d79-c9bb-4b40-9153-989791392664/cert-manager-cainjector/0.log" Jan 20 11:41:44 crc kubenswrapper[4725]: I0120 11:41:44.680951 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-bxlks_8b639e20-8ca7-4b37-8271-ada2858140b9/cert-manager-webhook/0.log" Jan 20 11:41:44 crc kubenswrapper[4725]: I0120 11:41:44.942678 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e27af684-a552-4b4d-ab63-82b662b0dad7" 
path="/var/lib/kubelet/pods/e27af684-a552-4b4d-ab63-82b662b0dad7/volumes" Jan 20 11:41:50 crc kubenswrapper[4725]: I0120 11:41:50.266927 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-sl5rg_0bc9f0db-ee2d-43d3-8fc7-66f2b155c710/prometheus-operator/0.log" Jan 20 11:41:50 crc kubenswrapper[4725]: I0120 11:41:50.279945 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-c88c9f498-lh85b_05acb89f-79ef-4e5a-8713-af3abbf86d5a/prometheus-operator-admission-webhook/0.log" Jan 20 11:41:50 crc kubenswrapper[4725]: I0120 11:41:50.299182 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5_a5d78053-6a08-448a-93ca-1c0e2334617a/prometheus-operator-admission-webhook/0.log" Jan 20 11:41:50 crc kubenswrapper[4725]: I0120 11:41:50.322683 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-cjnzp_ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002/operator/0.log" Jan 20 11:41:50 crc kubenswrapper[4725]: I0120 11:41:50.336844 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-ckz5m_5a2dcc7a-6d62-412d-a25f-fea592c85bf5/perses-operator/0.log" Jan 20 11:41:55 crc kubenswrapper[4725]: I0120 11:41:55.822992 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd_10d53364-23ca-4726-bed9-460fb6763fa1/extract/0.log" Jan 20 11:41:55 crc kubenswrapper[4725]: I0120 11:41:55.833897 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd_10d53364-23ca-4726-bed9-460fb6763fa1/util/0.log" Jan 20 11:41:55 crc kubenswrapper[4725]: I0120 11:41:55.881505 4725 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd_10d53364-23ca-4726-bed9-460fb6763fa1/pull/0.log" Jan 20 11:41:55 crc kubenswrapper[4725]: I0120 11:41:55.893428 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk_484dd827-7fd5-4cbc-878f-400b31b6179c/extract/0.log" Jan 20 11:41:55 crc kubenswrapper[4725]: I0120 11:41:55.904718 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk_484dd827-7fd5-4cbc-878f-400b31b6179c/util/0.log" Jan 20 11:41:55 crc kubenswrapper[4725]: I0120 11:41:55.921481 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk_484dd827-7fd5-4cbc-878f-400b31b6179c/pull/0.log" Jan 20 11:41:55 crc kubenswrapper[4725]: I0120 11:41:55.935449 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms_ea19653a-0b47-400b-bcce-8034cb7f6d55/extract/0.log" Jan 20 11:41:55 crc kubenswrapper[4725]: I0120 11:41:55.946660 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms_ea19653a-0b47-400b-bcce-8034cb7f6d55/util/0.log" Jan 20 11:41:55 crc kubenswrapper[4725]: I0120 11:41:55.957276 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms_ea19653a-0b47-400b-bcce-8034cb7f6d55/pull/0.log" Jan 20 11:41:55 crc kubenswrapper[4725]: I0120 11:41:55.981300 4725 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm_418d6042-ac1e-433e-a820-04d774775787/extract/0.log" Jan 20 11:41:55 crc kubenswrapper[4725]: I0120 11:41:55.991240 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm_418d6042-ac1e-433e-a820-04d774775787/util/0.log" Jan 20 11:41:56 crc kubenswrapper[4725]: I0120 11:41:56.003159 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm_418d6042-ac1e-433e-a820-04d774775787/pull/0.log" Jan 20 11:41:56 crc kubenswrapper[4725]: I0120 11:41:56.475214 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6dzml_e1530fd1-1850-4d4f-b6a7-cc1784d9c399/registry-server/0.log" Jan 20 11:41:56 crc kubenswrapper[4725]: I0120 11:41:56.482237 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6dzml_e1530fd1-1850-4d4f-b6a7-cc1784d9c399/extract-utilities/0.log" Jan 20 11:41:56 crc kubenswrapper[4725]: I0120 11:41:56.497498 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6dzml_e1530fd1-1850-4d4f-b6a7-cc1784d9c399/extract-content/0.log" Jan 20 11:41:57 crc kubenswrapper[4725]: I0120 11:41:57.150366 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hm4k5_da38c2a2-fb87-4115-ac25-0256bee850ae/registry-server/0.log" Jan 20 11:41:57 crc kubenswrapper[4725]: I0120 11:41:57.157894 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hm4k5_da38c2a2-fb87-4115-ac25-0256bee850ae/extract-utilities/0.log" Jan 20 11:41:57 crc kubenswrapper[4725]: I0120 11:41:57.168710 4725 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-hm4k5_da38c2a2-fb87-4115-ac25-0256bee850ae/extract-content/0.log" Jan 20 11:41:57 crc kubenswrapper[4725]: I0120 11:41:57.191414 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-htj9r_5666b0dd-5364-4bee-a091-26fa796770cf/marketplace-operator/0.log" Jan 20 11:41:57 crc kubenswrapper[4725]: I0120 11:41:57.585057 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hht7w_2c4020a9-4953-4dee-8bc0-2329493c8b8a/registry-server/0.log" Jan 20 11:41:57 crc kubenswrapper[4725]: I0120 11:41:57.590715 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hht7w_2c4020a9-4953-4dee-8bc0-2329493c8b8a/extract-utilities/0.log" Jan 20 11:41:57 crc kubenswrapper[4725]: I0120 11:41:57.600038 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hht7w_2c4020a9-4953-4dee-8bc0-2329493c8b8a/extract-content/0.log" Jan 20 11:42:01 crc kubenswrapper[4725]: I0120 11:42:01.432589 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-sl5rg_0bc9f0db-ee2d-43d3-8fc7-66f2b155c710/prometheus-operator/0.log" Jan 20 11:42:01 crc kubenswrapper[4725]: I0120 11:42:01.449440 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-c88c9f498-lh85b_05acb89f-79ef-4e5a-8713-af3abbf86d5a/prometheus-operator-admission-webhook/0.log" Jan 20 11:42:01 crc kubenswrapper[4725]: I0120 11:42:01.464474 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5_a5d78053-6a08-448a-93ca-1c0e2334617a/prometheus-operator-admission-webhook/0.log" Jan 20 11:42:01 crc kubenswrapper[4725]: I0120 11:42:01.483209 4725 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-cjnzp_ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002/operator/0.log" Jan 20 11:42:01 crc kubenswrapper[4725]: I0120 11:42:01.508999 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-ckz5m_5a2dcc7a-6d62-412d-a25f-fea592c85bf5/perses-operator/0.log" Jan 20 11:42:10 crc kubenswrapper[4725]: I0120 11:42:10.781149 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-sl5rg_0bc9f0db-ee2d-43d3-8fc7-66f2b155c710/prometheus-operator/0.log" Jan 20 11:42:10 crc kubenswrapper[4725]: I0120 11:42:10.798557 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-c88c9f498-lh85b_05acb89f-79ef-4e5a-8713-af3abbf86d5a/prometheus-operator-admission-webhook/0.log" Jan 20 11:42:10 crc kubenswrapper[4725]: I0120 11:42:10.815928 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5_a5d78053-6a08-448a-93ca-1c0e2334617a/prometheus-operator-admission-webhook/0.log" Jan 20 11:42:10 crc kubenswrapper[4725]: I0120 11:42:10.837412 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-cjnzp_ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002/operator/0.log" Jan 20 11:42:10 crc kubenswrapper[4725]: I0120 11:42:10.855779 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-ckz5m_5a2dcc7a-6d62-412d-a25f-fea592c85bf5/perses-operator/0.log" Jan 20 11:42:10 crc kubenswrapper[4725]: I0120 11:42:10.971900 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-8pwdf_f31ab59c-7288-4ebb-82b4-daa77ec5319c/cert-manager-controller/0.log" Jan 20 11:42:10 crc kubenswrapper[4725]: I0120 11:42:10.984872 4725 
log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-2m9v2_62554d79-c9bb-4b40-9153-989791392664/cert-manager-cainjector/0.log" Jan 20 11:42:11 crc kubenswrapper[4725]: I0120 11:42:11.002177 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-bxlks_8b639e20-8ca7-4b37-8271-ada2858140b9/cert-manager-webhook/0.log" Jan 20 11:42:11 crc kubenswrapper[4725]: I0120 11:42:11.521989 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-8pwdf_f31ab59c-7288-4ebb-82b4-daa77ec5319c/cert-manager-controller/0.log" Jan 20 11:42:11 crc kubenswrapper[4725]: I0120 11:42:11.536520 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-2m9v2_62554d79-c9bb-4b40-9153-989791392664/cert-manager-cainjector/0.log" Jan 20 11:42:11 crc kubenswrapper[4725]: I0120 11:42:11.549438 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-bxlks_8b639e20-8ca7-4b37-8271-ada2858140b9/cert-manager-webhook/0.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.052927 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-sh5db_b07c5d50-bb91-412d-b86a-3d736a16a81d/control-plane-machine-set-operator/0.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.069530 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hhz9f_808fb947-228d-42c4-ba11-480348f80d8a/kube-rbac-proxy/0.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.078990 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hhz9f_808fb947-228d-42c4-ba11-480348f80d8a/machine-api-operator/0.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.668823 4725 
log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75_34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83/extract/0.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.679359 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75_34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83/util/0.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.688498 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75_34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83/pull/0.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.701433 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4_6c49be43-a86b-4475-8bd3-a1105dd19ad1/extract/0.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.708762 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4_6c49be43-a86b-4475-8bd3-a1105dd19ad1/util/0.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.717686 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4_6c49be43-a86b-4475-8bd3-a1105dd19ad1/pull/0.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.733840 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_f490a619-9c48-49a0-857b-904084871923/alertmanager/0.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.741930 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_f490a619-9c48-49a0-857b-904084871923/config-reloader/0.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 
11:42:12.748754 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_f490a619-9c48-49a0-857b-904084871923/oauth-proxy/0.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.757123 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_f490a619-9c48-49a0-857b-904084871923/init-config-reloader/0.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.771445 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_650f5183-3a46-4da1-befe-a96b43c85a6e/curl/0.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.782137 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q_f84a2726-80cb-4393-84ca-d901b4ee446c/bridge/2.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.782799 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q_f84a2726-80cb-4393-84ca-d901b4ee446c/bridge/1.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.789446 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q_f84a2726-80cb-4393-84ca-d901b4ee446c/sg-core/0.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.803819 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p_6b74ea17-71c5-47e0-a15e-e963223f11f0/oauth-proxy/0.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.811058 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p_6b74ea17-71c5-47e0-a15e-e963223f11f0/bridge/2.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.811118 4725 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p_6b74ea17-71c5-47e0-a15e-e963223f11f0/bridge/1.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.815911 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p_6b74ea17-71c5-47e0-a15e-e963223f11f0/sg-core/0.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.826872 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-ff457bf89-458zm_739b7c2c-b11b-4260-a184-7dd184677dad/bridge/2.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.827023 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-ff457bf89-458zm_739b7c2c-b11b-4260-a184-7dd184677dad/bridge/1.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.832300 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-ff457bf89-458zm_739b7c2c-b11b-4260-a184-7dd184677dad/sg-core/0.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.845629 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g_10b6bc99-b2ce-4952-a481-bbabe3a3fc16/oauth-proxy/0.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.854547 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g_10b6bc99-b2ce-4952-a481-bbabe3a3fc16/bridge/1.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.854829 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g_10b6bc99-b2ce-4952-a481-bbabe3a3fc16/bridge/2.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.860427 4725 log.go:25] 
"Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g_10b6bc99-b2ce-4952-a481-bbabe3a3fc16/sg-core/0.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.871452 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7_14922311-0e93-4bf9-8980-72baefd93497/oauth-proxy/0.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.880270 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7_14922311-0e93-4bf9-8980-72baefd93497/bridge/1.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.880345 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7_14922311-0e93-4bf9-8980-72baefd93497/bridge/2.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.886921 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7_14922311-0e93-4bf9-8980-72baefd93497/sg-core/0.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.906821 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-68864d46cb-mqfr7_5b2eb85b-dd29-4dc6-9d02-1087e7119ae7/default-interconnect/0.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.917988 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-fxcvg_c22fff0f-fa8e-40e0-a8dc-a138398b06e7/prometheus-webhook-snmp/0.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.949759 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elastic-operator-6886c99b94-tzbc7_ce11e344-b219-4b22-b05b-a21b78fc7d98/manager/0.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.972115 4725 log.go:25] 
"Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6/elasticsearch/0.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.981132 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6/elastic-internal-init-filesystem/0.log" Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.987600 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6/elastic-internal-suspend/0.log" Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.001859 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_infrawatch-operators-4fmg5_514d6114-a2ee-4a88-9798-9a27066ed03a/registry-server/0.log" Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.015674 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_interconnect-operator-5bb49f789d-7p9dr_a923dc59-d518-4ee4-a92c-1bb5ad6e7158/interconnect-operator/0.log" Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.035104 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_7d31d6ca-dd83-489d-9956-abb0947df80d/prometheus/0.log" Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.041677 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_7d31d6ca-dd83-489d-9956-abb0947df80d/config-reloader/0.log" Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.050230 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_7d31d6ca-dd83-489d-9956-abb0947df80d/oauth-proxy/0.log" Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.058739 4725 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_prometheus-default-0_7d31d6ca-dd83-489d-9956-abb0947df80d/init-config-reloader/0.log" Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.103551 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_851c53a0-c674-49b2-88dc-77da0a70406b/docker-build/0.log" Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.110727 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_851c53a0-c674-49b2-88dc-77da0a70406b/git-clone/0.log" Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.121488 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_851c53a0-c674-49b2-88dc-77da0a70406b/manage-dockerfile/0.log" Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.136912 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_879163eb-1e0f-4030-aec9-69331c2e5ecd/qdr/0.log" Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.151733 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_184194a7-f32c-4db2-a055-5a776484cda8/docker-build/0.log" Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.170354 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_184194a7-f32c-4db2-a055-5a776484cda8/git-clone/0.log" Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.180855 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_184194a7-f32c-4db2-a055-5a776484cda8/manage-dockerfile/0.log" Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.245623 4725 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_60ad3a7d-367d-4604-a9d4-c6e3baf344ac/docker-build/0.log" Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.254937 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_60ad3a7d-367d-4604-a9d4-c6e3baf344ac/git-clone/0.log" Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.267844 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_60ad3a7d-367d-4604-a9d4-c6e3baf344ac/manage-dockerfile/0.log" Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.531424 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-9d4584887-5t9dx_653691a1-9088-47bd-97e2-4d2f17f885bf/operator/0.log" Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.549255 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-2-build_814e040b-c073-451b-80c4-2e90cb554a6b/docker-build/0.log" Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.556034 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-2-build_814e040b-c073-451b-80c4-2e90cb554a6b/git-clone/0.log" Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.566672 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-2-build_814e040b-c073-451b-80c4-2e90cb554a6b/manage-dockerfile/0.log" Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.634394 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286/docker-build/0.log" Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.646474 4725 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_sg-bridge-2-build_5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286/git-clone/0.log" Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.659158 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286/manage-dockerfile/0.log" Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.726392 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_c6289b31-17e1-4470-b65b-20f1454c9faf/docker-build/0.log" Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.732835 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_c6289b31-17e1-4470-b65b-20f1454c9faf/git-clone/0.log" Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.740881 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_c6289b31-17e1-4470-b65b-20f1454c9faf/manage-dockerfile/0.log" Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.798995 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7/docker-build/0.log" Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.806936 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7/git-clone/0.log" Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.816693 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7/manage-dockerfile/0.log" Jan 20 11:42:16 crc kubenswrapper[4725]: I0120 11:42:16.995836 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-86d4f8cb59-xtrqk_288c5de6-7288-478c-b790-1f348c4827f4/operator/0.log" Jan 20 11:42:17 crc kubenswrapper[4725]: I0120 11:42:17.013694 4725 
log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-2-build_b276c041-188f-4dd1-a7b4-0d0ba6531174/docker-build/0.log" Jan 20 11:42:17 crc kubenswrapper[4725]: I0120 11:42:17.024629 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-2-build_b276c041-188f-4dd1-a7b4-0d0ba6531174/git-clone/0.log" Jan 20 11:42:17 crc kubenswrapper[4725]: I0120 11:42:17.031538 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-2-build_b276c041-188f-4dd1-a7b4-0d0ba6531174/manage-dockerfile/0.log" Jan 20 11:42:17 crc kubenswrapper[4725]: I0120 11:42:17.053722 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-7l92d_6716d3a1-d97b-4a7f-9a35-7f304cc226ad/smoketest-collectd/0.log" Jan 20 11:42:17 crc kubenswrapper[4725]: I0120 11:42:17.060430 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-7l92d_6716d3a1-d97b-4a7f-9a35-7f304cc226ad/smoketest-ceilometer/0.log" Jan 20 11:42:17 crc kubenswrapper[4725]: I0120 11:42:17.082017 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-n5jwb_98772f19-fcd3-4ee3-91e7-aa87154c3c50/smoketest-collectd/0.log" Jan 20 11:42:17 crc kubenswrapper[4725]: I0120 11:42:17.088762 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-n5jwb_98772f19-fcd3-4ee3-91e7-aa87154c3c50/smoketest-ceilometer/0.log" Jan 20 11:42:17 crc kubenswrapper[4725]: I0120 11:42:17.108432 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-phjxw_3e274138-1522-41f2-8021-9f425af23d2e/smoketest-collectd/0.log" Jan 20 11:42:17 crc kubenswrapper[4725]: I0120 11:42:17.117018 4725 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-phjxw_3e274138-1522-41f2-8021-9f425af23d2e/smoketest-ceilometer/0.log" Jan 20 11:42:17 crc kubenswrapper[4725]: I0120 11:42:17.136667 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-z2qv6_6c81226d-b3a8-4f68-8c87-b32fe8ae7901/smoketest-collectd/0.log" Jan 20 11:42:17 crc kubenswrapper[4725]: I0120 11:42:17.144413 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-z2qv6_6c81226d-b3a8-4f68-8c87-b32fe8ae7901/smoketest-ceilometer/0.log" Jan 20 11:42:18 crc kubenswrapper[4725]: I0120 11:42:18.636889 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-z7f69_4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0/kube-multus-additional-cni-plugins/0.log" Jan 20 11:42:18 crc kubenswrapper[4725]: I0120 11:42:18.648310 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-z7f69_4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0/egress-router-binary-copy/0.log" Jan 20 11:42:18 crc kubenswrapper[4725]: I0120 11:42:18.656641 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-z7f69_4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0/cni-plugins/0.log" Jan 20 11:42:18 crc kubenswrapper[4725]: I0120 11:42:18.668452 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-z7f69_4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0/bond-cni-plugin/0.log" Jan 20 11:42:18 crc kubenswrapper[4725]: I0120 11:42:18.677810 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-z7f69_4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0/routeoverride-cni/0.log" Jan 20 11:42:18 crc kubenswrapper[4725]: I0120 11:42:18.687232 4725 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-z7f69_4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0/whereabouts-cni-bincopy/0.log" Jan 20 11:42:18 crc kubenswrapper[4725]: I0120 11:42:18.700280 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-z7f69_4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0/whereabouts-cni/0.log" Jan 20 11:42:18 crc kubenswrapper[4725]: I0120 11:42:18.716674 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-7j2sn_eca1f8da-59f2-404e-a5e0-dbe1a191b885/multus-admission-controller/0.log" Jan 20 11:42:18 crc kubenswrapper[4725]: I0120 11:42:18.730829 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-7j2sn_eca1f8da-59f2-404e-a5e0-dbe1a191b885/kube-rbac-proxy/0.log" Jan 20 11:42:18 crc kubenswrapper[4725]: I0120 11:42:18.773626 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vchwb_627f7c97-4173-413f-a90e-e2c5e058c53b/kube-multus/3.log" Jan 20 11:42:18 crc kubenswrapper[4725]: I0120 11:42:18.783793 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vchwb_627f7c97-4173-413f-a90e-e2c5e058c53b/kube-multus/2.log" Jan 20 11:42:18 crc kubenswrapper[4725]: I0120 11:42:18.810789 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-5lfc4_a5d55efc-e85a-4a02-a4ce-7355df9fea66/network-metrics-daemon/0.log" Jan 20 11:42:18 crc kubenswrapper[4725]: I0120 11:42:18.817023 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-5lfc4_a5d55efc-e85a-4a02-a4ce-7355df9fea66/kube-rbac-proxy/0.log" Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.625194 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ksjd9"] Jan 20 11:42:44 crc 
kubenswrapper[4725]: E0120 11:42:44.626449 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92a63a6c-5e81-4cb6-8c56-ee0673d781fa" containerName="registry-server" Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.626470 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="92a63a6c-5e81-4cb6-8c56-ee0673d781fa" containerName="registry-server" Jan 20 11:42:44 crc kubenswrapper[4725]: E0120 11:42:44.626491 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e27af684-a552-4b4d-ab63-82b662b0dad7" containerName="registry-server" Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.626500 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="e27af684-a552-4b4d-ab63-82b662b0dad7" containerName="registry-server" Jan 20 11:42:44 crc kubenswrapper[4725]: E0120 11:42:44.626516 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e27af684-a552-4b4d-ab63-82b662b0dad7" containerName="extract-utilities" Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.626524 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="e27af684-a552-4b4d-ab63-82b662b0dad7" containerName="extract-utilities" Jan 20 11:42:44 crc kubenswrapper[4725]: E0120 11:42:44.626545 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e27af684-a552-4b4d-ab63-82b662b0dad7" containerName="extract-content" Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.626551 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="e27af684-a552-4b4d-ab63-82b662b0dad7" containerName="extract-content" Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.626700 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="e27af684-a552-4b4d-ab63-82b662b0dad7" containerName="registry-server" Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.626718 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="92a63a6c-5e81-4cb6-8c56-ee0673d781fa" containerName="registry-server" Jan 20 11:42:44 crc 
kubenswrapper[4725]: I0120 11:42:44.627991 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ksjd9" Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.649148 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ksjd9"] Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.670810 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42b144f9-6444-48d2-8e34-ee4ab42f3221-catalog-content\") pod \"community-operators-ksjd9\" (UID: \"42b144f9-6444-48d2-8e34-ee4ab42f3221\") " pod="openshift-marketplace/community-operators-ksjd9" Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.670883 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42b144f9-6444-48d2-8e34-ee4ab42f3221-utilities\") pod \"community-operators-ksjd9\" (UID: \"42b144f9-6444-48d2-8e34-ee4ab42f3221\") " pod="openshift-marketplace/community-operators-ksjd9" Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.671162 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2xbb\" (UniqueName: \"kubernetes.io/projected/42b144f9-6444-48d2-8e34-ee4ab42f3221-kube-api-access-n2xbb\") pod \"community-operators-ksjd9\" (UID: \"42b144f9-6444-48d2-8e34-ee4ab42f3221\") " pod="openshift-marketplace/community-operators-ksjd9" Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.772906 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42b144f9-6444-48d2-8e34-ee4ab42f3221-utilities\") pod \"community-operators-ksjd9\" (UID: \"42b144f9-6444-48d2-8e34-ee4ab42f3221\") " pod="openshift-marketplace/community-operators-ksjd9" Jan 20 11:42:44 crc 
kubenswrapper[4725]: I0120 11:42:44.773017 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2xbb\" (UniqueName: \"kubernetes.io/projected/42b144f9-6444-48d2-8e34-ee4ab42f3221-kube-api-access-n2xbb\") pod \"community-operators-ksjd9\" (UID: \"42b144f9-6444-48d2-8e34-ee4ab42f3221\") " pod="openshift-marketplace/community-operators-ksjd9" Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.773173 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42b144f9-6444-48d2-8e34-ee4ab42f3221-catalog-content\") pod \"community-operators-ksjd9\" (UID: \"42b144f9-6444-48d2-8e34-ee4ab42f3221\") " pod="openshift-marketplace/community-operators-ksjd9" Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.774302 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42b144f9-6444-48d2-8e34-ee4ab42f3221-utilities\") pod \"community-operators-ksjd9\" (UID: \"42b144f9-6444-48d2-8e34-ee4ab42f3221\") " pod="openshift-marketplace/community-operators-ksjd9" Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.774499 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42b144f9-6444-48d2-8e34-ee4ab42f3221-catalog-content\") pod \"community-operators-ksjd9\" (UID: \"42b144f9-6444-48d2-8e34-ee4ab42f3221\") " pod="openshift-marketplace/community-operators-ksjd9" Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.804457 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2xbb\" (UniqueName: \"kubernetes.io/projected/42b144f9-6444-48d2-8e34-ee4ab42f3221-kube-api-access-n2xbb\") pod \"community-operators-ksjd9\" (UID: \"42b144f9-6444-48d2-8e34-ee4ab42f3221\") " pod="openshift-marketplace/community-operators-ksjd9" Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 
11:42:44.947419 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ksjd9" Jan 20 11:42:45 crc kubenswrapper[4725]: I0120 11:42:45.212744 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ksjd9"] Jan 20 11:42:46 crc kubenswrapper[4725]: I0120 11:42:46.028274 4725 generic.go:334] "Generic (PLEG): container finished" podID="42b144f9-6444-48d2-8e34-ee4ab42f3221" containerID="a2d65d3d7a2ad8f89ffcfebc63ff2d80d6211a40aa1da43a3766f50389bd0ac0" exitCode=0 Jan 20 11:42:46 crc kubenswrapper[4725]: I0120 11:42:46.028333 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksjd9" event={"ID":"42b144f9-6444-48d2-8e34-ee4ab42f3221","Type":"ContainerDied","Data":"a2d65d3d7a2ad8f89ffcfebc63ff2d80d6211a40aa1da43a3766f50389bd0ac0"} Jan 20 11:42:46 crc kubenswrapper[4725]: I0120 11:42:46.028691 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksjd9" event={"ID":"42b144f9-6444-48d2-8e34-ee4ab42f3221","Type":"ContainerStarted","Data":"08307e5000e1200390aae9cca49768e312d87b34a14dc48a1fcbef24ca6e7152"} Jan 20 11:42:48 crc kubenswrapper[4725]: I0120 11:42:48.057890 4725 generic.go:334] "Generic (PLEG): container finished" podID="42b144f9-6444-48d2-8e34-ee4ab42f3221" containerID="ed6ffddf4033b8a3333ca4e0f9e5d37e97f3e5013451c4a8f770b92f672bc6cc" exitCode=0 Jan 20 11:42:48 crc kubenswrapper[4725]: I0120 11:42:48.057999 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksjd9" event={"ID":"42b144f9-6444-48d2-8e34-ee4ab42f3221","Type":"ContainerDied","Data":"ed6ffddf4033b8a3333ca4e0f9e5d37e97f3e5013451c4a8f770b92f672bc6cc"} Jan 20 11:42:49 crc kubenswrapper[4725]: I0120 11:42:49.071580 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksjd9" 
event={"ID":"42b144f9-6444-48d2-8e34-ee4ab42f3221","Type":"ContainerStarted","Data":"3425aa642a811a9b6d945d1cc4603c48bcab10d4e6da2b2362c9d6b3fd135a28"} Jan 20 11:42:49 crc kubenswrapper[4725]: I0120 11:42:49.098275 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ksjd9" podStartSLOduration=2.421850446 podStartE2EDuration="5.098204919s" podCreationTimestamp="2026-01-20 11:42:44 +0000 UTC" firstStartedPulling="2026-01-20 11:42:46.032057814 +0000 UTC m=+2294.240379787" lastFinishedPulling="2026-01-20 11:42:48.708412287 +0000 UTC m=+2296.916734260" observedRunningTime="2026-01-20 11:42:49.092034264 +0000 UTC m=+2297.300356257" watchObservedRunningTime="2026-01-20 11:42:49.098204919 +0000 UTC m=+2297.306526892" Jan 20 11:42:54 crc kubenswrapper[4725]: I0120 11:42:54.947643 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ksjd9" Jan 20 11:42:54 crc kubenswrapper[4725]: I0120 11:42:54.948771 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ksjd9" Jan 20 11:42:55 crc kubenswrapper[4725]: I0120 11:42:55.008266 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ksjd9" Jan 20 11:42:55 crc kubenswrapper[4725]: I0120 11:42:55.175466 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ksjd9" Jan 20 11:42:55 crc kubenswrapper[4725]: I0120 11:42:55.253065 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ksjd9"] Jan 20 11:42:57 crc kubenswrapper[4725]: I0120 11:42:57.144156 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ksjd9" podUID="42b144f9-6444-48d2-8e34-ee4ab42f3221" containerName="registry-server" 
containerID="cri-o://3425aa642a811a9b6d945d1cc4603c48bcab10d4e6da2b2362c9d6b3fd135a28" gracePeriod=2 Jan 20 11:42:57 crc kubenswrapper[4725]: I0120 11:42:57.610713 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ksjd9" Jan 20 11:42:57 crc kubenswrapper[4725]: I0120 11:42:57.646527 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42b144f9-6444-48d2-8e34-ee4ab42f3221-catalog-content\") pod \"42b144f9-6444-48d2-8e34-ee4ab42f3221\" (UID: \"42b144f9-6444-48d2-8e34-ee4ab42f3221\") " Jan 20 11:42:57 crc kubenswrapper[4725]: I0120 11:42:57.646650 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2xbb\" (UniqueName: \"kubernetes.io/projected/42b144f9-6444-48d2-8e34-ee4ab42f3221-kube-api-access-n2xbb\") pod \"42b144f9-6444-48d2-8e34-ee4ab42f3221\" (UID: \"42b144f9-6444-48d2-8e34-ee4ab42f3221\") " Jan 20 11:42:57 crc kubenswrapper[4725]: I0120 11:42:57.646711 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42b144f9-6444-48d2-8e34-ee4ab42f3221-utilities\") pod \"42b144f9-6444-48d2-8e34-ee4ab42f3221\" (UID: \"42b144f9-6444-48d2-8e34-ee4ab42f3221\") " Jan 20 11:42:57 crc kubenswrapper[4725]: I0120 11:42:57.648426 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42b144f9-6444-48d2-8e34-ee4ab42f3221-utilities" (OuterVolumeSpecName: "utilities") pod "42b144f9-6444-48d2-8e34-ee4ab42f3221" (UID: "42b144f9-6444-48d2-8e34-ee4ab42f3221"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:42:57 crc kubenswrapper[4725]: I0120 11:42:57.655418 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42b144f9-6444-48d2-8e34-ee4ab42f3221-kube-api-access-n2xbb" (OuterVolumeSpecName: "kube-api-access-n2xbb") pod "42b144f9-6444-48d2-8e34-ee4ab42f3221" (UID: "42b144f9-6444-48d2-8e34-ee4ab42f3221"). InnerVolumeSpecName "kube-api-access-n2xbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:42:57 crc kubenswrapper[4725]: I0120 11:42:57.703745 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42b144f9-6444-48d2-8e34-ee4ab42f3221-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42b144f9-6444-48d2-8e34-ee4ab42f3221" (UID: "42b144f9-6444-48d2-8e34-ee4ab42f3221"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:42:57 crc kubenswrapper[4725]: I0120 11:42:57.748281 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2xbb\" (UniqueName: \"kubernetes.io/projected/42b144f9-6444-48d2-8e34-ee4ab42f3221-kube-api-access-n2xbb\") on node \"crc\" DevicePath \"\"" Jan 20 11:42:57 crc kubenswrapper[4725]: I0120 11:42:57.748331 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42b144f9-6444-48d2-8e34-ee4ab42f3221-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:42:57 crc kubenswrapper[4725]: I0120 11:42:57.748345 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42b144f9-6444-48d2-8e34-ee4ab42f3221-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:42:58 crc kubenswrapper[4725]: I0120 11:42:58.250550 4725 generic.go:334] "Generic (PLEG): container finished" podID="42b144f9-6444-48d2-8e34-ee4ab42f3221" 
containerID="3425aa642a811a9b6d945d1cc4603c48bcab10d4e6da2b2362c9d6b3fd135a28" exitCode=0 Jan 20 11:42:58 crc kubenswrapper[4725]: I0120 11:42:58.251465 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksjd9" event={"ID":"42b144f9-6444-48d2-8e34-ee4ab42f3221","Type":"ContainerDied","Data":"3425aa642a811a9b6d945d1cc4603c48bcab10d4e6da2b2362c9d6b3fd135a28"} Jan 20 11:42:58 crc kubenswrapper[4725]: I0120 11:42:58.251627 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksjd9" event={"ID":"42b144f9-6444-48d2-8e34-ee4ab42f3221","Type":"ContainerDied","Data":"08307e5000e1200390aae9cca49768e312d87b34a14dc48a1fcbef24ca6e7152"} Jan 20 11:42:58 crc kubenswrapper[4725]: I0120 11:42:58.251731 4725 scope.go:117] "RemoveContainer" containerID="3425aa642a811a9b6d945d1cc4603c48bcab10d4e6da2b2362c9d6b3fd135a28" Jan 20 11:42:58 crc kubenswrapper[4725]: I0120 11:42:58.252146 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ksjd9" Jan 20 11:42:58 crc kubenswrapper[4725]: I0120 11:42:58.274292 4725 scope.go:117] "RemoveContainer" containerID="ed6ffddf4033b8a3333ca4e0f9e5d37e97f3e5013451c4a8f770b92f672bc6cc" Jan 20 11:42:58 crc kubenswrapper[4725]: I0120 11:42:58.299748 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ksjd9"] Jan 20 11:42:58 crc kubenswrapper[4725]: I0120 11:42:58.308209 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ksjd9"] Jan 20 11:42:58 crc kubenswrapper[4725]: I0120 11:42:58.311580 4725 scope.go:117] "RemoveContainer" containerID="a2d65d3d7a2ad8f89ffcfebc63ff2d80d6211a40aa1da43a3766f50389bd0ac0" Jan 20 11:42:58 crc kubenswrapper[4725]: I0120 11:42:58.332171 4725 scope.go:117] "RemoveContainer" containerID="3425aa642a811a9b6d945d1cc4603c48bcab10d4e6da2b2362c9d6b3fd135a28" Jan 20 11:42:58 crc kubenswrapper[4725]: E0120 11:42:58.333006 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3425aa642a811a9b6d945d1cc4603c48bcab10d4e6da2b2362c9d6b3fd135a28\": container with ID starting with 3425aa642a811a9b6d945d1cc4603c48bcab10d4e6da2b2362c9d6b3fd135a28 not found: ID does not exist" containerID="3425aa642a811a9b6d945d1cc4603c48bcab10d4e6da2b2362c9d6b3fd135a28" Jan 20 11:42:58 crc kubenswrapper[4725]: I0120 11:42:58.333126 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3425aa642a811a9b6d945d1cc4603c48bcab10d4e6da2b2362c9d6b3fd135a28"} err="failed to get container status \"3425aa642a811a9b6d945d1cc4603c48bcab10d4e6da2b2362c9d6b3fd135a28\": rpc error: code = NotFound desc = could not find container \"3425aa642a811a9b6d945d1cc4603c48bcab10d4e6da2b2362c9d6b3fd135a28\": container with ID starting with 3425aa642a811a9b6d945d1cc4603c48bcab10d4e6da2b2362c9d6b3fd135a28 not 
found: ID does not exist" Jan 20 11:42:58 crc kubenswrapper[4725]: I0120 11:42:58.333189 4725 scope.go:117] "RemoveContainer" containerID="ed6ffddf4033b8a3333ca4e0f9e5d37e97f3e5013451c4a8f770b92f672bc6cc" Jan 20 11:42:58 crc kubenswrapper[4725]: E0120 11:42:58.334242 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed6ffddf4033b8a3333ca4e0f9e5d37e97f3e5013451c4a8f770b92f672bc6cc\": container with ID starting with ed6ffddf4033b8a3333ca4e0f9e5d37e97f3e5013451c4a8f770b92f672bc6cc not found: ID does not exist" containerID="ed6ffddf4033b8a3333ca4e0f9e5d37e97f3e5013451c4a8f770b92f672bc6cc" Jan 20 11:42:58 crc kubenswrapper[4725]: I0120 11:42:58.334272 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed6ffddf4033b8a3333ca4e0f9e5d37e97f3e5013451c4a8f770b92f672bc6cc"} err="failed to get container status \"ed6ffddf4033b8a3333ca4e0f9e5d37e97f3e5013451c4a8f770b92f672bc6cc\": rpc error: code = NotFound desc = could not find container \"ed6ffddf4033b8a3333ca4e0f9e5d37e97f3e5013451c4a8f770b92f672bc6cc\": container with ID starting with ed6ffddf4033b8a3333ca4e0f9e5d37e97f3e5013451c4a8f770b92f672bc6cc not found: ID does not exist" Jan 20 11:42:58 crc kubenswrapper[4725]: I0120 11:42:58.334299 4725 scope.go:117] "RemoveContainer" containerID="a2d65d3d7a2ad8f89ffcfebc63ff2d80d6211a40aa1da43a3766f50389bd0ac0" Jan 20 11:42:58 crc kubenswrapper[4725]: E0120 11:42:58.334596 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2d65d3d7a2ad8f89ffcfebc63ff2d80d6211a40aa1da43a3766f50389bd0ac0\": container with ID starting with a2d65d3d7a2ad8f89ffcfebc63ff2d80d6211a40aa1da43a3766f50389bd0ac0 not found: ID does not exist" containerID="a2d65d3d7a2ad8f89ffcfebc63ff2d80d6211a40aa1da43a3766f50389bd0ac0" Jan 20 11:42:58 crc kubenswrapper[4725]: I0120 11:42:58.334623 4725 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2d65d3d7a2ad8f89ffcfebc63ff2d80d6211a40aa1da43a3766f50389bd0ac0"} err="failed to get container status \"a2d65d3d7a2ad8f89ffcfebc63ff2d80d6211a40aa1da43a3766f50389bd0ac0\": rpc error: code = NotFound desc = could not find container \"a2d65d3d7a2ad8f89ffcfebc63ff2d80d6211a40aa1da43a3766f50389bd0ac0\": container with ID starting with a2d65d3d7a2ad8f89ffcfebc63ff2d80d6211a40aa1da43a3766f50389bd0ac0 not found: ID does not exist" Jan 20 11:42:58 crc kubenswrapper[4725]: I0120 11:42:58.950381 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42b144f9-6444-48d2-8e34-ee4ab42f3221" path="/var/lib/kubelet/pods/42b144f9-6444-48d2-8e34-ee4ab42f3221/volumes" Jan 20 11:43:26 crc kubenswrapper[4725]: I0120 11:43:26.728229 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:43:26 crc kubenswrapper[4725]: I0120 11:43:26.730306 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:43:56 crc kubenswrapper[4725]: I0120 11:43:56.727950 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:43:56 crc kubenswrapper[4725]: I0120 11:43:56.728968 4725 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:44:26 crc kubenswrapper[4725]: I0120 11:44:26.727529 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:44:26 crc kubenswrapper[4725]: I0120 11:44:26.728442 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:44:26 crc kubenswrapper[4725]: I0120 11:44:26.728516 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:44:26 crc kubenswrapper[4725]: I0120 11:44:26.729492 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16"} pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 11:44:26 crc kubenswrapper[4725]: I0120 11:44:26.729562 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" 
containerID="cri-o://2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" gracePeriod=600 Jan 20 11:44:27 crc kubenswrapper[4725]: I0120 11:44:27.084914 4725 generic.go:334] "Generic (PLEG): container finished" podID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" exitCode=0 Jan 20 11:44:27 crc kubenswrapper[4725]: I0120 11:44:27.084993 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerDied","Data":"2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16"} Jan 20 11:44:27 crc kubenswrapper[4725]: I0120 11:44:27.085554 4725 scope.go:117] "RemoveContainer" containerID="292cc1dcad048c4d54590f6587ea4f65e53dd5f83f4235deae520cfb086277da" Jan 20 11:44:27 crc kubenswrapper[4725]: E0120 11:44:27.526840 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:44:28 crc kubenswrapper[4725]: I0120 11:44:28.099899 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:44:28 crc kubenswrapper[4725]: E0120 11:44:28.100219 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" 
podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:44:39 crc kubenswrapper[4725]: I0120 11:44:39.932469 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:44:39 crc kubenswrapper[4725]: E0120 11:44:39.933144 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:44:54 crc kubenswrapper[4725]: I0120 11:44:54.932653 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:44:54 crc kubenswrapper[4725]: E0120 11:44:54.933958 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.390476 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx"] Jan 20 11:45:00 crc kubenswrapper[4725]: E0120 11:45:00.395332 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b144f9-6444-48d2-8e34-ee4ab42f3221" containerName="extract-content" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.395376 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b144f9-6444-48d2-8e34-ee4ab42f3221" containerName="extract-content" Jan 20 11:45:00 crc 
kubenswrapper[4725]: E0120 11:45:00.395396 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b144f9-6444-48d2-8e34-ee4ab42f3221" containerName="extract-utilities" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.395404 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b144f9-6444-48d2-8e34-ee4ab42f3221" containerName="extract-utilities" Jan 20 11:45:00 crc kubenswrapper[4725]: E0120 11:45:00.395425 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b144f9-6444-48d2-8e34-ee4ab42f3221" containerName="registry-server" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.395432 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b144f9-6444-48d2-8e34-ee4ab42f3221" containerName="registry-server" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.395591 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="42b144f9-6444-48d2-8e34-ee4ab42f3221" containerName="registry-server" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.404672 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.405013 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx"] Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.408579 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.409291 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.588062 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b706901-8a1e-4f91-988f-0f295b512b2b-secret-volume\") pod \"collect-profiles-29481825-lmjvx\" (UID: \"5b706901-8a1e-4f91-988f-0f295b512b2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.588433 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b706901-8a1e-4f91-988f-0f295b512b2b-config-volume\") pod \"collect-profiles-29481825-lmjvx\" (UID: \"5b706901-8a1e-4f91-988f-0f295b512b2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.588560 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjlfd\" (UniqueName: \"kubernetes.io/projected/5b706901-8a1e-4f91-988f-0f295b512b2b-kube-api-access-rjlfd\") pod \"collect-profiles-29481825-lmjvx\" (UID: \"5b706901-8a1e-4f91-988f-0f295b512b2b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.690115 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjlfd\" (UniqueName: \"kubernetes.io/projected/5b706901-8a1e-4f91-988f-0f295b512b2b-kube-api-access-rjlfd\") pod \"collect-profiles-29481825-lmjvx\" (UID: \"5b706901-8a1e-4f91-988f-0f295b512b2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.690225 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b706901-8a1e-4f91-988f-0f295b512b2b-secret-volume\") pod \"collect-profiles-29481825-lmjvx\" (UID: \"5b706901-8a1e-4f91-988f-0f295b512b2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.690274 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b706901-8a1e-4f91-988f-0f295b512b2b-config-volume\") pod \"collect-profiles-29481825-lmjvx\" (UID: \"5b706901-8a1e-4f91-988f-0f295b512b2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.691270 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b706901-8a1e-4f91-988f-0f295b512b2b-config-volume\") pod \"collect-profiles-29481825-lmjvx\" (UID: \"5b706901-8a1e-4f91-988f-0f295b512b2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.700902 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/5b706901-8a1e-4f91-988f-0f295b512b2b-secret-volume\") pod \"collect-profiles-29481825-lmjvx\" (UID: \"5b706901-8a1e-4f91-988f-0f295b512b2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.721844 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjlfd\" (UniqueName: \"kubernetes.io/projected/5b706901-8a1e-4f91-988f-0f295b512b2b-kube-api-access-rjlfd\") pod \"collect-profiles-29481825-lmjvx\" (UID: \"5b706901-8a1e-4f91-988f-0f295b512b2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.737578 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" Jan 20 11:45:01 crc kubenswrapper[4725]: I0120 11:45:01.035454 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx"] Jan 20 11:45:01 crc kubenswrapper[4725]: I0120 11:45:01.304398 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" event={"ID":"5b706901-8a1e-4f91-988f-0f295b512b2b","Type":"ContainerStarted","Data":"29f5fb382a65157c4331129f6528e3f9e62bf727870488a755d44c354a4f9892"} Jan 20 11:45:01 crc kubenswrapper[4725]: I0120 11:45:01.304509 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" event={"ID":"5b706901-8a1e-4f91-988f-0f295b512b2b","Type":"ContainerStarted","Data":"6ece4f6fab7495ec98fb9171574deaf28dccb122b438616bc7f6a16567a70ea3"} Jan 20 11:45:01 crc kubenswrapper[4725]: I0120 11:45:01.328473 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" 
podStartSLOduration=1.3284469269999999 podStartE2EDuration="1.328446927s" podCreationTimestamp="2026-01-20 11:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:45:01.327127515 +0000 UTC m=+2429.535449488" watchObservedRunningTime="2026-01-20 11:45:01.328446927 +0000 UTC m=+2429.536768900" Jan 20 11:45:02 crc kubenswrapper[4725]: I0120 11:45:02.313399 4725 generic.go:334] "Generic (PLEG): container finished" podID="5b706901-8a1e-4f91-988f-0f295b512b2b" containerID="29f5fb382a65157c4331129f6528e3f9e62bf727870488a755d44c354a4f9892" exitCode=0 Jan 20 11:45:02 crc kubenswrapper[4725]: I0120 11:45:02.313660 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" event={"ID":"5b706901-8a1e-4f91-988f-0f295b512b2b","Type":"ContainerDied","Data":"29f5fb382a65157c4331129f6528e3f9e62bf727870488a755d44c354a4f9892"} Jan 20 11:45:03 crc kubenswrapper[4725]: I0120 11:45:03.583428 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" Jan 20 11:45:03 crc kubenswrapper[4725]: I0120 11:45:03.720343 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjlfd\" (UniqueName: \"kubernetes.io/projected/5b706901-8a1e-4f91-988f-0f295b512b2b-kube-api-access-rjlfd\") pod \"5b706901-8a1e-4f91-988f-0f295b512b2b\" (UID: \"5b706901-8a1e-4f91-988f-0f295b512b2b\") " Jan 20 11:45:03 crc kubenswrapper[4725]: I0120 11:45:03.720456 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b706901-8a1e-4f91-988f-0f295b512b2b-secret-volume\") pod \"5b706901-8a1e-4f91-988f-0f295b512b2b\" (UID: \"5b706901-8a1e-4f91-988f-0f295b512b2b\") " Jan 20 11:45:03 crc kubenswrapper[4725]: I0120 11:45:03.720706 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b706901-8a1e-4f91-988f-0f295b512b2b-config-volume\") pod \"5b706901-8a1e-4f91-988f-0f295b512b2b\" (UID: \"5b706901-8a1e-4f91-988f-0f295b512b2b\") " Jan 20 11:45:03 crc kubenswrapper[4725]: I0120 11:45:03.721834 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b706901-8a1e-4f91-988f-0f295b512b2b-config-volume" (OuterVolumeSpecName: "config-volume") pod "5b706901-8a1e-4f91-988f-0f295b512b2b" (UID: "5b706901-8a1e-4f91-988f-0f295b512b2b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:45:03 crc kubenswrapper[4725]: I0120 11:45:03.722484 4725 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b706901-8a1e-4f91-988f-0f295b512b2b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 11:45:03 crc kubenswrapper[4725]: I0120 11:45:03.728223 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b706901-8a1e-4f91-988f-0f295b512b2b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5b706901-8a1e-4f91-988f-0f295b512b2b" (UID: "5b706901-8a1e-4f91-988f-0f295b512b2b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:45:03 crc kubenswrapper[4725]: I0120 11:45:03.730497 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b706901-8a1e-4f91-988f-0f295b512b2b-kube-api-access-rjlfd" (OuterVolumeSpecName: "kube-api-access-rjlfd") pod "5b706901-8a1e-4f91-988f-0f295b512b2b" (UID: "5b706901-8a1e-4f91-988f-0f295b512b2b"). InnerVolumeSpecName "kube-api-access-rjlfd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:45:03 crc kubenswrapper[4725]: I0120 11:45:03.824301 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjlfd\" (UniqueName: \"kubernetes.io/projected/5b706901-8a1e-4f91-988f-0f295b512b2b-kube-api-access-rjlfd\") on node \"crc\" DevicePath \"\"" Jan 20 11:45:03 crc kubenswrapper[4725]: I0120 11:45:03.824363 4725 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b706901-8a1e-4f91-988f-0f295b512b2b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 11:45:04 crc kubenswrapper[4725]: I0120 11:45:04.452959 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" event={"ID":"5b706901-8a1e-4f91-988f-0f295b512b2b","Type":"ContainerDied","Data":"6ece4f6fab7495ec98fb9171574deaf28dccb122b438616bc7f6a16567a70ea3"} Jan 20 11:45:04 crc kubenswrapper[4725]: I0120 11:45:04.453733 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ece4f6fab7495ec98fb9171574deaf28dccb122b438616bc7f6a16567a70ea3" Jan 20 11:45:04 crc kubenswrapper[4725]: I0120 11:45:04.453061 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" Jan 20 11:45:04 crc kubenswrapper[4725]: I0120 11:45:04.674755 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9"] Jan 20 11:45:04 crc kubenswrapper[4725]: I0120 11:45:04.681436 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9"] Jan 20 11:45:04 crc kubenswrapper[4725]: I0120 11:45:04.944343 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2d56c6e-b9ad-4de9-8fe6-06b00293050e" path="/var/lib/kubelet/pods/e2d56c6e-b9ad-4de9-8fe6-06b00293050e/volumes" Jan 20 11:45:08 crc kubenswrapper[4725]: I0120 11:45:08.932932 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:45:08 crc kubenswrapper[4725]: E0120 11:45:08.935610 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:45:17 crc kubenswrapper[4725]: I0120 11:45:17.018317 4725 scope.go:117] "RemoveContainer" containerID="e8e7a4e36aba81c1bb4622af4c301d49b30996cd6ad2e2e0a5c6e98da1b99ab0" Jan 20 11:45:19 crc kubenswrapper[4725]: I0120 11:45:19.932635 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:45:19 crc kubenswrapper[4725]: E0120 11:45:19.933662 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:45:33 crc kubenswrapper[4725]: I0120 11:45:33.932579 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:45:33 crc kubenswrapper[4725]: E0120 11:45:33.933827 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:45:47 crc kubenswrapper[4725]: I0120 11:45:47.932231 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:45:47 crc kubenswrapper[4725]: E0120 11:45:47.933355 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:45:59 crc kubenswrapper[4725]: I0120 11:45:59.933045 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:45:59 crc kubenswrapper[4725]: E0120 11:45:59.934164 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:46:12 crc kubenswrapper[4725]: I0120 11:46:12.938475 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:46:12 crc kubenswrapper[4725]: E0120 11:46:12.939737 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:46:27 crc kubenswrapper[4725]: I0120 11:46:27.932996 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:46:27 crc kubenswrapper[4725]: E0120 11:46:27.934098 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:46:38 crc kubenswrapper[4725]: I0120 11:46:38.933990 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:46:38 crc kubenswrapper[4725]: E0120 11:46:38.935132 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:46:52 crc kubenswrapper[4725]: I0120 11:46:52.941847 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:46:52 crc kubenswrapper[4725]: E0120 11:46:52.945465 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:47:04 crc kubenswrapper[4725]: I0120 11:47:04.940600 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:47:04 crc kubenswrapper[4725]: E0120 11:47:04.943307 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:47:17 crc kubenswrapper[4725]: I0120 11:47:17.932270 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:47:17 crc kubenswrapper[4725]: E0120 11:47:17.933194 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:47:21 crc kubenswrapper[4725]: I0120 11:47:21.130329 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-h4d72"] Jan 20 11:47:21 crc kubenswrapper[4725]: E0120 11:47:21.131291 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b706901-8a1e-4f91-988f-0f295b512b2b" containerName="collect-profiles" Jan 20 11:47:21 crc kubenswrapper[4725]: I0120 11:47:21.131310 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b706901-8a1e-4f91-988f-0f295b512b2b" containerName="collect-profiles" Jan 20 11:47:21 crc kubenswrapper[4725]: I0120 11:47:21.131494 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b706901-8a1e-4f91-988f-0f295b512b2b" containerName="collect-profiles" Jan 20 11:47:21 crc kubenswrapper[4725]: I0120 11:47:21.133026 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-h4d72" Jan 20 11:47:21 crc kubenswrapper[4725]: I0120 11:47:21.161128 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-h4d72"] Jan 20 11:47:21 crc kubenswrapper[4725]: I0120 11:47:21.176879 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjfhs\" (UniqueName: \"kubernetes.io/projected/134d5e80-3994-4b7d-9680-4bac160108e3-kube-api-access-cjfhs\") pod \"infrawatch-operators-h4d72\" (UID: \"134d5e80-3994-4b7d-9680-4bac160108e3\") " pod="service-telemetry/infrawatch-operators-h4d72" Jan 20 11:47:21 crc kubenswrapper[4725]: I0120 11:47:21.278437 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjfhs\" (UniqueName: \"kubernetes.io/projected/134d5e80-3994-4b7d-9680-4bac160108e3-kube-api-access-cjfhs\") pod \"infrawatch-operators-h4d72\" (UID: \"134d5e80-3994-4b7d-9680-4bac160108e3\") " pod="service-telemetry/infrawatch-operators-h4d72" Jan 20 11:47:21 crc kubenswrapper[4725]: I0120 11:47:21.309575 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjfhs\" (UniqueName: \"kubernetes.io/projected/134d5e80-3994-4b7d-9680-4bac160108e3-kube-api-access-cjfhs\") pod \"infrawatch-operators-h4d72\" (UID: \"134d5e80-3994-4b7d-9680-4bac160108e3\") " pod="service-telemetry/infrawatch-operators-h4d72" Jan 20 11:47:21 crc kubenswrapper[4725]: I0120 11:47:21.463780 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-h4d72" Jan 20 11:47:21 crc kubenswrapper[4725]: I0120 11:47:21.744936 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-h4d72"] Jan 20 11:47:21 crc kubenswrapper[4725]: I0120 11:47:21.762200 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 11:47:22 crc kubenswrapper[4725]: I0120 11:47:22.743725 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-h4d72" event={"ID":"134d5e80-3994-4b7d-9680-4bac160108e3","Type":"ContainerStarted","Data":"35eecd26e007f8682c75e048d9de8382463fa5b925064ca599e37428f67494a5"} Jan 20 11:47:22 crc kubenswrapper[4725]: I0120 11:47:22.744239 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-h4d72" event={"ID":"134d5e80-3994-4b7d-9680-4bac160108e3","Type":"ContainerStarted","Data":"1ec9f71e1cb0c4d069c12c7836b2eea740de9592c2750e3aac3ee699298c3f0c"} Jan 20 11:47:22 crc kubenswrapper[4725]: I0120 11:47:22.765128 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-h4d72" podStartSLOduration=1.6058907059999998 podStartE2EDuration="1.765066742s" podCreationTimestamp="2026-01-20 11:47:21 +0000 UTC" firstStartedPulling="2026-01-20 11:47:21.761765136 +0000 UTC m=+2569.970087109" lastFinishedPulling="2026-01-20 11:47:21.920941172 +0000 UTC m=+2570.129263145" observedRunningTime="2026-01-20 11:47:22.76247466 +0000 UTC m=+2570.970796633" watchObservedRunningTime="2026-01-20 11:47:22.765066742 +0000 UTC m=+2570.973388715" Jan 20 11:47:28 crc kubenswrapper[4725]: I0120 11:47:28.933352 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:47:28 crc kubenswrapper[4725]: E0120 11:47:28.934481 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:47:31 crc kubenswrapper[4725]: I0120 11:47:31.464653 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-h4d72" Jan 20 11:47:31 crc kubenswrapper[4725]: I0120 11:47:31.464769 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-h4d72" Jan 20 11:47:31 crc kubenswrapper[4725]: I0120 11:47:31.507244 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-h4d72" Jan 20 11:47:31 crc kubenswrapper[4725]: I0120 11:47:31.866429 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-h4d72" Jan 20 11:47:31 crc kubenswrapper[4725]: I0120 11:47:31.920137 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-h4d72"] Jan 20 11:47:33 crc kubenswrapper[4725]: I0120 11:47:33.836419 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-h4d72" podUID="134d5e80-3994-4b7d-9680-4bac160108e3" containerName="registry-server" containerID="cri-o://35eecd26e007f8682c75e048d9de8382463fa5b925064ca599e37428f67494a5" gracePeriod=2 Jan 20 11:47:34 crc kubenswrapper[4725]: I0120 11:47:34.345969 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-h4d72" Jan 20 11:47:34 crc kubenswrapper[4725]: I0120 11:47:34.525608 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjfhs\" (UniqueName: \"kubernetes.io/projected/134d5e80-3994-4b7d-9680-4bac160108e3-kube-api-access-cjfhs\") pod \"134d5e80-3994-4b7d-9680-4bac160108e3\" (UID: \"134d5e80-3994-4b7d-9680-4bac160108e3\") " Jan 20 11:47:34 crc kubenswrapper[4725]: I0120 11:47:34.533034 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/134d5e80-3994-4b7d-9680-4bac160108e3-kube-api-access-cjfhs" (OuterVolumeSpecName: "kube-api-access-cjfhs") pod "134d5e80-3994-4b7d-9680-4bac160108e3" (UID: "134d5e80-3994-4b7d-9680-4bac160108e3"). InnerVolumeSpecName "kube-api-access-cjfhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:47:34 crc kubenswrapper[4725]: I0120 11:47:34.628694 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjfhs\" (UniqueName: \"kubernetes.io/projected/134d5e80-3994-4b7d-9680-4bac160108e3-kube-api-access-cjfhs\") on node \"crc\" DevicePath \"\"" Jan 20 11:47:34 crc kubenswrapper[4725]: I0120 11:47:34.846399 4725 generic.go:334] "Generic (PLEG): container finished" podID="134d5e80-3994-4b7d-9680-4bac160108e3" containerID="35eecd26e007f8682c75e048d9de8382463fa5b925064ca599e37428f67494a5" exitCode=0 Jan 20 11:47:34 crc kubenswrapper[4725]: I0120 11:47:34.846478 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-h4d72" Jan 20 11:47:34 crc kubenswrapper[4725]: I0120 11:47:34.846549 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-h4d72" event={"ID":"134d5e80-3994-4b7d-9680-4bac160108e3","Type":"ContainerDied","Data":"35eecd26e007f8682c75e048d9de8382463fa5b925064ca599e37428f67494a5"} Jan 20 11:47:34 crc kubenswrapper[4725]: I0120 11:47:34.848437 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-h4d72" event={"ID":"134d5e80-3994-4b7d-9680-4bac160108e3","Type":"ContainerDied","Data":"1ec9f71e1cb0c4d069c12c7836b2eea740de9592c2750e3aac3ee699298c3f0c"} Jan 20 11:47:34 crc kubenswrapper[4725]: I0120 11:47:34.848472 4725 scope.go:117] "RemoveContainer" containerID="35eecd26e007f8682c75e048d9de8382463fa5b925064ca599e37428f67494a5" Jan 20 11:47:34 crc kubenswrapper[4725]: I0120 11:47:34.873893 4725 scope.go:117] "RemoveContainer" containerID="35eecd26e007f8682c75e048d9de8382463fa5b925064ca599e37428f67494a5" Jan 20 11:47:34 crc kubenswrapper[4725]: E0120 11:47:34.874669 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35eecd26e007f8682c75e048d9de8382463fa5b925064ca599e37428f67494a5\": container with ID starting with 35eecd26e007f8682c75e048d9de8382463fa5b925064ca599e37428f67494a5 not found: ID does not exist" containerID="35eecd26e007f8682c75e048d9de8382463fa5b925064ca599e37428f67494a5" Jan 20 11:47:34 crc kubenswrapper[4725]: I0120 11:47:34.874804 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35eecd26e007f8682c75e048d9de8382463fa5b925064ca599e37428f67494a5"} err="failed to get container status \"35eecd26e007f8682c75e048d9de8382463fa5b925064ca599e37428f67494a5\": rpc error: code = NotFound desc = could not find container 
\"35eecd26e007f8682c75e048d9de8382463fa5b925064ca599e37428f67494a5\": container with ID starting with 35eecd26e007f8682c75e048d9de8382463fa5b925064ca599e37428f67494a5 not found: ID does not exist" Jan 20 11:47:34 crc kubenswrapper[4725]: I0120 11:47:34.892471 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-h4d72"] Jan 20 11:47:34 crc kubenswrapper[4725]: I0120 11:47:34.902566 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-h4d72"] Jan 20 11:47:34 crc kubenswrapper[4725]: I0120 11:47:34.942336 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="134d5e80-3994-4b7d-9680-4bac160108e3" path="/var/lib/kubelet/pods/134d5e80-3994-4b7d-9680-4bac160108e3/volumes" Jan 20 11:47:41 crc kubenswrapper[4725]: I0120 11:47:41.932722 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:47:41 crc kubenswrapper[4725]: E0120 11:47:41.934170 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:47:53 crc kubenswrapper[4725]: I0120 11:47:53.932420 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:47:53 crc kubenswrapper[4725]: E0120 11:47:53.933588 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:48:07 crc kubenswrapper[4725]: I0120 11:48:07.933189 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:48:07 crc kubenswrapper[4725]: E0120 11:48:07.934516 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:48:21 crc kubenswrapper[4725]: I0120 11:48:21.932927 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:48:21 crc kubenswrapper[4725]: E0120 11:48:21.934554 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:48:32 crc kubenswrapper[4725]: I0120 11:48:32.952016 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:48:32 crc kubenswrapper[4725]: E0120 11:48:32.953491 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:48:44 crc kubenswrapper[4725]: I0120 11:48:44.932426 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:48:44 crc kubenswrapper[4725]: E0120 11:48:44.933599 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:48:58 crc kubenswrapper[4725]: I0120 11:48:58.933641 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:48:58 crc kubenswrapper[4725]: E0120 11:48:58.934819 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:49:10 crc kubenswrapper[4725]: I0120 11:49:10.938517 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:49:10 crc kubenswrapper[4725]: E0120 11:49:10.940674 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:49:23 crc kubenswrapper[4725]: I0120 11:49:23.932533 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:49:23 crc kubenswrapper[4725]: E0120 11:49:23.933726 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:49:38 crc kubenswrapper[4725]: I0120 11:49:38.932917 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:49:39 crc kubenswrapper[4725]: I0120 11:49:39.343748 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"b487c1da831952b6be0c22d02ea25935db4d2c51c76529ece3ca853c064d9030"} Jan 20 11:49:44 crc kubenswrapper[4725]: I0120 11:49:44.680722 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vvprb"] Jan 20 11:49:44 crc kubenswrapper[4725]: E0120 11:49:44.682372 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="134d5e80-3994-4b7d-9680-4bac160108e3" containerName="registry-server" Jan 20 11:49:44 crc kubenswrapper[4725]: I0120 11:49:44.682396 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="134d5e80-3994-4b7d-9680-4bac160108e3" 
containerName="registry-server" Jan 20 11:49:44 crc kubenswrapper[4725]: I0120 11:49:44.682609 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="134d5e80-3994-4b7d-9680-4bac160108e3" containerName="registry-server" Jan 20 11:49:44 crc kubenswrapper[4725]: I0120 11:49:44.684191 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:49:44 crc kubenswrapper[4725]: I0120 11:49:44.706171 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vvprb"] Jan 20 11:49:44 crc kubenswrapper[4725]: I0120 11:49:44.794533 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-utilities\") pod \"redhat-operators-vvprb\" (UID: \"dce5b8ba-279b-46b4-a0df-e8b73a0cb582\") " pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:49:44 crc kubenswrapper[4725]: I0120 11:49:44.794716 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2n9l\" (UniqueName: \"kubernetes.io/projected/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-kube-api-access-g2n9l\") pod \"redhat-operators-vvprb\" (UID: \"dce5b8ba-279b-46b4-a0df-e8b73a0cb582\") " pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:49:44 crc kubenswrapper[4725]: I0120 11:49:44.794754 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-catalog-content\") pod \"redhat-operators-vvprb\" (UID: \"dce5b8ba-279b-46b4-a0df-e8b73a0cb582\") " pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:49:44 crc kubenswrapper[4725]: I0120 11:49:44.896489 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2n9l\" 
(UniqueName: \"kubernetes.io/projected/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-kube-api-access-g2n9l\") pod \"redhat-operators-vvprb\" (UID: \"dce5b8ba-279b-46b4-a0df-e8b73a0cb582\") " pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:49:44 crc kubenswrapper[4725]: I0120 11:49:44.896575 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-catalog-content\") pod \"redhat-operators-vvprb\" (UID: \"dce5b8ba-279b-46b4-a0df-e8b73a0cb582\") " pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:49:44 crc kubenswrapper[4725]: I0120 11:49:44.896622 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-utilities\") pod \"redhat-operators-vvprb\" (UID: \"dce5b8ba-279b-46b4-a0df-e8b73a0cb582\") " pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:49:44 crc kubenswrapper[4725]: I0120 11:49:44.897675 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-catalog-content\") pod \"redhat-operators-vvprb\" (UID: \"dce5b8ba-279b-46b4-a0df-e8b73a0cb582\") " pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:49:44 crc kubenswrapper[4725]: I0120 11:49:44.897718 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-utilities\") pod \"redhat-operators-vvprb\" (UID: \"dce5b8ba-279b-46b4-a0df-e8b73a0cb582\") " pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:49:44 crc kubenswrapper[4725]: I0120 11:49:44.927697 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2n9l\" (UniqueName: 
\"kubernetes.io/projected/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-kube-api-access-g2n9l\") pod \"redhat-operators-vvprb\" (UID: \"dce5b8ba-279b-46b4-a0df-e8b73a0cb582\") " pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:49:45 crc kubenswrapper[4725]: I0120 11:49:45.006629 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:49:45 crc kubenswrapper[4725]: I0120 11:49:45.338704 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vvprb"] Jan 20 11:49:45 crc kubenswrapper[4725]: I0120 11:49:45.396086 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvprb" event={"ID":"dce5b8ba-279b-46b4-a0df-e8b73a0cb582","Type":"ContainerStarted","Data":"90a960928f25c322f34742bfaafed232a0042a646b8514ff0a1281c50bb598a7"} Jan 20 11:49:46 crc kubenswrapper[4725]: I0120 11:49:46.422072 4725 generic.go:334] "Generic (PLEG): container finished" podID="dce5b8ba-279b-46b4-a0df-e8b73a0cb582" containerID="5c40fcda326e30d5bffb4b535082d0fc65a2ae282b80583b62a2acf027249b9b" exitCode=0 Jan 20 11:49:46 crc kubenswrapper[4725]: I0120 11:49:46.422402 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvprb" event={"ID":"dce5b8ba-279b-46b4-a0df-e8b73a0cb582","Type":"ContainerDied","Data":"5c40fcda326e30d5bffb4b535082d0fc65a2ae282b80583b62a2acf027249b9b"} Jan 20 11:49:47 crc kubenswrapper[4725]: I0120 11:49:47.450403 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvprb" event={"ID":"dce5b8ba-279b-46b4-a0df-e8b73a0cb582","Type":"ContainerStarted","Data":"c47c62c3bff380af5f15775be2b4415806b8bf40d13aaa5c400101577c5a642b"} Jan 20 11:49:49 crc kubenswrapper[4725]: I0120 11:49:49.469760 4725 generic.go:334] "Generic (PLEG): container finished" podID="dce5b8ba-279b-46b4-a0df-e8b73a0cb582" 
containerID="c47c62c3bff380af5f15775be2b4415806b8bf40d13aaa5c400101577c5a642b" exitCode=0 Jan 20 11:49:49 crc kubenswrapper[4725]: I0120 11:49:49.470059 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvprb" event={"ID":"dce5b8ba-279b-46b4-a0df-e8b73a0cb582","Type":"ContainerDied","Data":"c47c62c3bff380af5f15775be2b4415806b8bf40d13aaa5c400101577c5a642b"} Jan 20 11:49:50 crc kubenswrapper[4725]: I0120 11:49:50.484428 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvprb" event={"ID":"dce5b8ba-279b-46b4-a0df-e8b73a0cb582","Type":"ContainerStarted","Data":"25188fa8478892d146d4e27d961c4eeecabf38c07b7c4c896095fa460c7bfbe4"} Jan 20 11:49:50 crc kubenswrapper[4725]: I0120 11:49:50.508581 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vvprb" podStartSLOduration=2.7538572500000003 podStartE2EDuration="6.508535065s" podCreationTimestamp="2026-01-20 11:49:44 +0000 UTC" firstStartedPulling="2026-01-20 11:49:46.427025922 +0000 UTC m=+2714.635347885" lastFinishedPulling="2026-01-20 11:49:50.181703727 +0000 UTC m=+2718.390025700" observedRunningTime="2026-01-20 11:49:50.504840209 +0000 UTC m=+2718.713162182" watchObservedRunningTime="2026-01-20 11:49:50.508535065 +0000 UTC m=+2718.716857028" Jan 20 11:49:55 crc kubenswrapper[4725]: I0120 11:49:55.007463 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:49:55 crc kubenswrapper[4725]: I0120 11:49:55.008356 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:49:56 crc kubenswrapper[4725]: I0120 11:49:56.061411 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vvprb" podUID="dce5b8ba-279b-46b4-a0df-e8b73a0cb582" containerName="registry-server" 
probeResult="failure" output=< Jan 20 11:49:56 crc kubenswrapper[4725]: timeout: failed to connect service ":50051" within 1s Jan 20 11:49:56 crc kubenswrapper[4725]: > Jan 20 11:50:05 crc kubenswrapper[4725]: I0120 11:50:05.053940 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:50:05 crc kubenswrapper[4725]: I0120 11:50:05.101391 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:50:05 crc kubenswrapper[4725]: I0120 11:50:05.303700 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vvprb"] Jan 20 11:50:06 crc kubenswrapper[4725]: I0120 11:50:06.629556 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vvprb" podUID="dce5b8ba-279b-46b4-a0df-e8b73a0cb582" containerName="registry-server" containerID="cri-o://25188fa8478892d146d4e27d961c4eeecabf38c07b7c4c896095fa460c7bfbe4" gracePeriod=2 Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.136728 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.259895 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2n9l\" (UniqueName: \"kubernetes.io/projected/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-kube-api-access-g2n9l\") pod \"dce5b8ba-279b-46b4-a0df-e8b73a0cb582\" (UID: \"dce5b8ba-279b-46b4-a0df-e8b73a0cb582\") " Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.260188 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-utilities\") pod \"dce5b8ba-279b-46b4-a0df-e8b73a0cb582\" (UID: \"dce5b8ba-279b-46b4-a0df-e8b73a0cb582\") " Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.261064 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-catalog-content\") pod \"dce5b8ba-279b-46b4-a0df-e8b73a0cb582\" (UID: \"dce5b8ba-279b-46b4-a0df-e8b73a0cb582\") " Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.262344 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-utilities" (OuterVolumeSpecName: "utilities") pod "dce5b8ba-279b-46b4-a0df-e8b73a0cb582" (UID: "dce5b8ba-279b-46b4-a0df-e8b73a0cb582"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.267748 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-kube-api-access-g2n9l" (OuterVolumeSpecName: "kube-api-access-g2n9l") pod "dce5b8ba-279b-46b4-a0df-e8b73a0cb582" (UID: "dce5b8ba-279b-46b4-a0df-e8b73a0cb582"). InnerVolumeSpecName "kube-api-access-g2n9l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.363416 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2n9l\" (UniqueName: \"kubernetes.io/projected/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-kube-api-access-g2n9l\") on node \"crc\" DevicePath \"\"" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.363463 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.385644 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dce5b8ba-279b-46b4-a0df-e8b73a0cb582" (UID: "dce5b8ba-279b-46b4-a0df-e8b73a0cb582"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.465272 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.648477 4725 generic.go:334] "Generic (PLEG): container finished" podID="dce5b8ba-279b-46b4-a0df-e8b73a0cb582" containerID="25188fa8478892d146d4e27d961c4eeecabf38c07b7c4c896095fa460c7bfbe4" exitCode=0 Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.648546 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvprb" event={"ID":"dce5b8ba-279b-46b4-a0df-e8b73a0cb582","Type":"ContainerDied","Data":"25188fa8478892d146d4e27d961c4eeecabf38c07b7c4c896095fa460c7bfbe4"} Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.648566 4725 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.648593 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvprb" event={"ID":"dce5b8ba-279b-46b4-a0df-e8b73a0cb582","Type":"ContainerDied","Data":"90a960928f25c322f34742bfaafed232a0042a646b8514ff0a1281c50bb598a7"} Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.648619 4725 scope.go:117] "RemoveContainer" containerID="25188fa8478892d146d4e27d961c4eeecabf38c07b7c4c896095fa460c7bfbe4" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.690853 4725 scope.go:117] "RemoveContainer" containerID="c47c62c3bff380af5f15775be2b4415806b8bf40d13aaa5c400101577c5a642b" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.696911 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vvprb"] Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.706925 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vvprb"] Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.726104 4725 scope.go:117] "RemoveContainer" containerID="5c40fcda326e30d5bffb4b535082d0fc65a2ae282b80583b62a2acf027249b9b" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.747602 4725 scope.go:117] "RemoveContainer" containerID="25188fa8478892d146d4e27d961c4eeecabf38c07b7c4c896095fa460c7bfbe4" Jan 20 11:50:08 crc kubenswrapper[4725]: E0120 11:50:08.748547 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25188fa8478892d146d4e27d961c4eeecabf38c07b7c4c896095fa460c7bfbe4\": container with ID starting with 25188fa8478892d146d4e27d961c4eeecabf38c07b7c4c896095fa460c7bfbe4 not found: ID does not exist" containerID="25188fa8478892d146d4e27d961c4eeecabf38c07b7c4c896095fa460c7bfbe4" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.748722 4725 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25188fa8478892d146d4e27d961c4eeecabf38c07b7c4c896095fa460c7bfbe4"} err="failed to get container status \"25188fa8478892d146d4e27d961c4eeecabf38c07b7c4c896095fa460c7bfbe4\": rpc error: code = NotFound desc = could not find container \"25188fa8478892d146d4e27d961c4eeecabf38c07b7c4c896095fa460c7bfbe4\": container with ID starting with 25188fa8478892d146d4e27d961c4eeecabf38c07b7c4c896095fa460c7bfbe4 not found: ID does not exist" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.748781 4725 scope.go:117] "RemoveContainer" containerID="c47c62c3bff380af5f15775be2b4415806b8bf40d13aaa5c400101577c5a642b" Jan 20 11:50:08 crc kubenswrapper[4725]: E0120 11:50:08.749629 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c47c62c3bff380af5f15775be2b4415806b8bf40d13aaa5c400101577c5a642b\": container with ID starting with c47c62c3bff380af5f15775be2b4415806b8bf40d13aaa5c400101577c5a642b not found: ID does not exist" containerID="c47c62c3bff380af5f15775be2b4415806b8bf40d13aaa5c400101577c5a642b" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.749743 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c47c62c3bff380af5f15775be2b4415806b8bf40d13aaa5c400101577c5a642b"} err="failed to get container status \"c47c62c3bff380af5f15775be2b4415806b8bf40d13aaa5c400101577c5a642b\": rpc error: code = NotFound desc = could not find container \"c47c62c3bff380af5f15775be2b4415806b8bf40d13aaa5c400101577c5a642b\": container with ID starting with c47c62c3bff380af5f15775be2b4415806b8bf40d13aaa5c400101577c5a642b not found: ID does not exist" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.749794 4725 scope.go:117] "RemoveContainer" containerID="5c40fcda326e30d5bffb4b535082d0fc65a2ae282b80583b62a2acf027249b9b" Jan 20 11:50:08 crc kubenswrapper[4725]: E0120 
11:50:08.750241 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c40fcda326e30d5bffb4b535082d0fc65a2ae282b80583b62a2acf027249b9b\": container with ID starting with 5c40fcda326e30d5bffb4b535082d0fc65a2ae282b80583b62a2acf027249b9b not found: ID does not exist" containerID="5c40fcda326e30d5bffb4b535082d0fc65a2ae282b80583b62a2acf027249b9b" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.750273 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c40fcda326e30d5bffb4b535082d0fc65a2ae282b80583b62a2acf027249b9b"} err="failed to get container status \"5c40fcda326e30d5bffb4b535082d0fc65a2ae282b80583b62a2acf027249b9b\": rpc error: code = NotFound desc = could not find container \"5c40fcda326e30d5bffb4b535082d0fc65a2ae282b80583b62a2acf027249b9b\": container with ID starting with 5c40fcda326e30d5bffb4b535082d0fc65a2ae282b80583b62a2acf027249b9b not found: ID does not exist" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.942498 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dce5b8ba-279b-46b4-a0df-e8b73a0cb582" path="/var/lib/kubelet/pods/dce5b8ba-279b-46b4-a0df-e8b73a0cb582/volumes" Jan 20 11:51:32 crc kubenswrapper[4725]: I0120 11:51:32.685006 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zt2mx"] Jan 20 11:51:32 crc kubenswrapper[4725]: E0120 11:51:32.686460 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce5b8ba-279b-46b4-a0df-e8b73a0cb582" containerName="extract-content" Jan 20 11:51:32 crc kubenswrapper[4725]: I0120 11:51:32.686485 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce5b8ba-279b-46b4-a0df-e8b73a0cb582" containerName="extract-content" Jan 20 11:51:32 crc kubenswrapper[4725]: E0120 11:51:32.686513 4725 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="dce5b8ba-279b-46b4-a0df-e8b73a0cb582" containerName="extract-utilities" Jan 20 11:51:32 crc kubenswrapper[4725]: I0120 11:51:32.686523 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce5b8ba-279b-46b4-a0df-e8b73a0cb582" containerName="extract-utilities" Jan 20 11:51:32 crc kubenswrapper[4725]: E0120 11:51:32.686547 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce5b8ba-279b-46b4-a0df-e8b73a0cb582" containerName="registry-server" Jan 20 11:51:32 crc kubenswrapper[4725]: I0120 11:51:32.686558 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce5b8ba-279b-46b4-a0df-e8b73a0cb582" containerName="registry-server" Jan 20 11:51:32 crc kubenswrapper[4725]: I0120 11:51:32.686752 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="dce5b8ba-279b-46b4-a0df-e8b73a0cb582" containerName="registry-server" Jan 20 11:51:32 crc kubenswrapper[4725]: I0120 11:51:32.688222 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:32 crc kubenswrapper[4725]: I0120 11:51:32.698335 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a768f0-365b-431a-88fc-22a3f6c9ec4b-catalog-content\") pod \"certified-operators-zt2mx\" (UID: \"61a768f0-365b-431a-88fc-22a3f6c9ec4b\") " pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:32 crc kubenswrapper[4725]: I0120 11:51:32.698886 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a768f0-365b-431a-88fc-22a3f6c9ec4b-utilities\") pod \"certified-operators-zt2mx\" (UID: \"61a768f0-365b-431a-88fc-22a3f6c9ec4b\") " pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:32 crc kubenswrapper[4725]: I0120 11:51:32.698920 4725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsfzf\" (UniqueName: \"kubernetes.io/projected/61a768f0-365b-431a-88fc-22a3f6c9ec4b-kube-api-access-hsfzf\") pod \"certified-operators-zt2mx\" (UID: \"61a768f0-365b-431a-88fc-22a3f6c9ec4b\") " pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:32 crc kubenswrapper[4725]: I0120 11:51:32.710190 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zt2mx"] Jan 20 11:51:32 crc kubenswrapper[4725]: I0120 11:51:32.800736 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a768f0-365b-431a-88fc-22a3f6c9ec4b-catalog-content\") pod \"certified-operators-zt2mx\" (UID: \"61a768f0-365b-431a-88fc-22a3f6c9ec4b\") " pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:32 crc kubenswrapper[4725]: I0120 11:51:32.801112 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a768f0-365b-431a-88fc-22a3f6c9ec4b-utilities\") pod \"certified-operators-zt2mx\" (UID: \"61a768f0-365b-431a-88fc-22a3f6c9ec4b\") " pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:32 crc kubenswrapper[4725]: I0120 11:51:32.801146 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsfzf\" (UniqueName: \"kubernetes.io/projected/61a768f0-365b-431a-88fc-22a3f6c9ec4b-kube-api-access-hsfzf\") pod \"certified-operators-zt2mx\" (UID: \"61a768f0-365b-431a-88fc-22a3f6c9ec4b\") " pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:32 crc kubenswrapper[4725]: I0120 11:51:32.801752 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a768f0-365b-431a-88fc-22a3f6c9ec4b-catalog-content\") pod \"certified-operators-zt2mx\" (UID: 
\"61a768f0-365b-431a-88fc-22a3f6c9ec4b\") " pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:32 crc kubenswrapper[4725]: I0120 11:51:32.801890 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a768f0-365b-431a-88fc-22a3f6c9ec4b-utilities\") pod \"certified-operators-zt2mx\" (UID: \"61a768f0-365b-431a-88fc-22a3f6c9ec4b\") " pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:32 crc kubenswrapper[4725]: I0120 11:51:32.823893 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsfzf\" (UniqueName: \"kubernetes.io/projected/61a768f0-365b-431a-88fc-22a3f6c9ec4b-kube-api-access-hsfzf\") pod \"certified-operators-zt2mx\" (UID: \"61a768f0-365b-431a-88fc-22a3f6c9ec4b\") " pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:33 crc kubenswrapper[4725]: I0120 11:51:33.021285 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:33 crc kubenswrapper[4725]: I0120 11:51:33.360938 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zt2mx"] Jan 20 11:51:33 crc kubenswrapper[4725]: I0120 11:51:33.636051 4725 generic.go:334] "Generic (PLEG): container finished" podID="61a768f0-365b-431a-88fc-22a3f6c9ec4b" containerID="439c27da629e8d548ff2341cd19df7c6cd9c5bb048de7df33d00c7d90b2ae60c" exitCode=0 Jan 20 11:51:33 crc kubenswrapper[4725]: I0120 11:51:33.636139 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zt2mx" event={"ID":"61a768f0-365b-431a-88fc-22a3f6c9ec4b","Type":"ContainerDied","Data":"439c27da629e8d548ff2341cd19df7c6cd9c5bb048de7df33d00c7d90b2ae60c"} Jan 20 11:51:33 crc kubenswrapper[4725]: I0120 11:51:33.636173 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zt2mx" 
event={"ID":"61a768f0-365b-431a-88fc-22a3f6c9ec4b","Type":"ContainerStarted","Data":"b8eb4165aba353118b0eaefaaf0a753011eb612f99ca1e758ca08b0d1b5df660"} Jan 20 11:51:34 crc kubenswrapper[4725]: I0120 11:51:34.647911 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zt2mx" event={"ID":"61a768f0-365b-431a-88fc-22a3f6c9ec4b","Type":"ContainerStarted","Data":"eba18d43c98871c9f3643d4810df2fe92638eb33812632d2591731c4c56f9b75"} Jan 20 11:51:35 crc kubenswrapper[4725]: I0120 11:51:35.659288 4725 generic.go:334] "Generic (PLEG): container finished" podID="61a768f0-365b-431a-88fc-22a3f6c9ec4b" containerID="eba18d43c98871c9f3643d4810df2fe92638eb33812632d2591731c4c56f9b75" exitCode=0 Jan 20 11:51:35 crc kubenswrapper[4725]: I0120 11:51:35.659352 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zt2mx" event={"ID":"61a768f0-365b-431a-88fc-22a3f6c9ec4b","Type":"ContainerDied","Data":"eba18d43c98871c9f3643d4810df2fe92638eb33812632d2591731c4c56f9b75"} Jan 20 11:51:36 crc kubenswrapper[4725]: I0120 11:51:36.677110 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zt2mx" event={"ID":"61a768f0-365b-431a-88fc-22a3f6c9ec4b","Type":"ContainerStarted","Data":"989165e4d588af83988f7e26bee921d2095aac7f4d5275cecb8e7b6ca3c70036"} Jan 20 11:51:43 crc kubenswrapper[4725]: I0120 11:51:43.022701 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:43 crc kubenswrapper[4725]: I0120 11:51:43.023749 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:43 crc kubenswrapper[4725]: I0120 11:51:43.091817 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:43 crc kubenswrapper[4725]: I0120 
11:51:43.112815 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zt2mx" podStartSLOduration=8.648207774 podStartE2EDuration="11.112786507s" podCreationTimestamp="2026-01-20 11:51:32 +0000 UTC" firstStartedPulling="2026-01-20 11:51:33.639198211 +0000 UTC m=+2821.847520184" lastFinishedPulling="2026-01-20 11:51:36.103776944 +0000 UTC m=+2824.312098917" observedRunningTime="2026-01-20 11:51:36.709132399 +0000 UTC m=+2824.917454382" watchObservedRunningTime="2026-01-20 11:51:43.112786507 +0000 UTC m=+2831.321108480" Jan 20 11:51:43 crc kubenswrapper[4725]: I0120 11:51:43.800124 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:43 crc kubenswrapper[4725]: I0120 11:51:43.856229 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zt2mx"] Jan 20 11:51:45 crc kubenswrapper[4725]: I0120 11:51:45.767696 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zt2mx" podUID="61a768f0-365b-431a-88fc-22a3f6c9ec4b" containerName="registry-server" containerID="cri-o://989165e4d588af83988f7e26bee921d2095aac7f4d5275cecb8e7b6ca3c70036" gracePeriod=2 Jan 20 11:51:46 crc kubenswrapper[4725]: I0120 11:51:46.778263 4725 generic.go:334] "Generic (PLEG): container finished" podID="61a768f0-365b-431a-88fc-22a3f6c9ec4b" containerID="989165e4d588af83988f7e26bee921d2095aac7f4d5275cecb8e7b6ca3c70036" exitCode=0 Jan 20 11:51:46 crc kubenswrapper[4725]: I0120 11:51:46.778339 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zt2mx" event={"ID":"61a768f0-365b-431a-88fc-22a3f6c9ec4b","Type":"ContainerDied","Data":"989165e4d588af83988f7e26bee921d2095aac7f4d5275cecb8e7b6ca3c70036"} Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.322940 4725 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.482838 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsfzf\" (UniqueName: \"kubernetes.io/projected/61a768f0-365b-431a-88fc-22a3f6c9ec4b-kube-api-access-hsfzf\") pod \"61a768f0-365b-431a-88fc-22a3f6c9ec4b\" (UID: \"61a768f0-365b-431a-88fc-22a3f6c9ec4b\") " Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.482942 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a768f0-365b-431a-88fc-22a3f6c9ec4b-catalog-content\") pod \"61a768f0-365b-431a-88fc-22a3f6c9ec4b\" (UID: \"61a768f0-365b-431a-88fc-22a3f6c9ec4b\") " Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.483173 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a768f0-365b-431a-88fc-22a3f6c9ec4b-utilities\") pod \"61a768f0-365b-431a-88fc-22a3f6c9ec4b\" (UID: \"61a768f0-365b-431a-88fc-22a3f6c9ec4b\") " Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.484309 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61a768f0-365b-431a-88fc-22a3f6c9ec4b-utilities" (OuterVolumeSpecName: "utilities") pod "61a768f0-365b-431a-88fc-22a3f6c9ec4b" (UID: "61a768f0-365b-431a-88fc-22a3f6c9ec4b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.503974 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61a768f0-365b-431a-88fc-22a3f6c9ec4b-kube-api-access-hsfzf" (OuterVolumeSpecName: "kube-api-access-hsfzf") pod "61a768f0-365b-431a-88fc-22a3f6c9ec4b" (UID: "61a768f0-365b-431a-88fc-22a3f6c9ec4b"). 
InnerVolumeSpecName "kube-api-access-hsfzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.562560 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61a768f0-365b-431a-88fc-22a3f6c9ec4b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61a768f0-365b-431a-88fc-22a3f6c9ec4b" (UID: "61a768f0-365b-431a-88fc-22a3f6c9ec4b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.587506 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a768f0-365b-431a-88fc-22a3f6c9ec4b-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.587558 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsfzf\" (UniqueName: \"kubernetes.io/projected/61a768f0-365b-431a-88fc-22a3f6c9ec4b-kube-api-access-hsfzf\") on node \"crc\" DevicePath \"\"" Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.587569 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a768f0-365b-431a-88fc-22a3f6c9ec4b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.791657 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zt2mx" event={"ID":"61a768f0-365b-431a-88fc-22a3f6c9ec4b","Type":"ContainerDied","Data":"b8eb4165aba353118b0eaefaaf0a753011eb612f99ca1e758ca08b0d1b5df660"} Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.791759 4725 scope.go:117] "RemoveContainer" containerID="989165e4d588af83988f7e26bee921d2095aac7f4d5275cecb8e7b6ca3c70036" Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.791817 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.818574 4725 scope.go:117] "RemoveContainer" containerID="eba18d43c98871c9f3643d4810df2fe92638eb33812632d2591731c4c56f9b75" Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.841341 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zt2mx"] Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.848117 4725 scope.go:117] "RemoveContainer" containerID="439c27da629e8d548ff2341cd19df7c6cd9c5bb048de7df33d00c7d90b2ae60c" Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.848950 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zt2mx"] Jan 20 11:51:48 crc kubenswrapper[4725]: I0120 11:51:48.953986 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61a768f0-365b-431a-88fc-22a3f6c9ec4b" path="/var/lib/kubelet/pods/61a768f0-365b-431a-88fc-22a3f6c9ec4b/volumes" Jan 20 11:51:56 crc kubenswrapper[4725]: I0120 11:51:56.728007 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:51:56 crc kubenswrapper[4725]: I0120 11:51:56.728988 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:52:26 crc kubenswrapper[4725]: I0120 11:52:26.727675 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:52:26 crc kubenswrapper[4725]: I0120 11:52:26.728779 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:52:56 crc kubenswrapper[4725]: I0120 11:52:56.728485 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:52:56 crc kubenswrapper[4725]: I0120 11:52:56.729558 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:52:56 crc kubenswrapper[4725]: I0120 11:52:56.729649 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:52:56 crc kubenswrapper[4725]: I0120 11:52:56.730820 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b487c1da831952b6be0c22d02ea25935db4d2c51c76529ece3ca853c064d9030"} pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 11:52:56 crc kubenswrapper[4725]: I0120 11:52:56.730975 4725 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" containerID="cri-o://b487c1da831952b6be0c22d02ea25935db4d2c51c76529ece3ca853c064d9030" gracePeriod=600 Jan 20 11:52:57 crc kubenswrapper[4725]: I0120 11:52:57.505629 4725 generic.go:334] "Generic (PLEG): container finished" podID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerID="b487c1da831952b6be0c22d02ea25935db4d2c51c76529ece3ca853c064d9030" exitCode=0 Jan 20 11:52:57 crc kubenswrapper[4725]: I0120 11:52:57.505710 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerDied","Data":"b487c1da831952b6be0c22d02ea25935db4d2c51c76529ece3ca853c064d9030"} Jan 20 11:52:57 crc kubenswrapper[4725]: I0120 11:52:57.506607 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a"} Jan 20 11:52:57 crc kubenswrapper[4725]: I0120 11:52:57.506642 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.410618 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pkr8m"] Jan 20 11:53:00 crc kubenswrapper[4725]: E0120 11:53:00.411671 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a768f0-365b-431a-88fc-22a3f6c9ec4b" containerName="registry-server" Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.411700 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a768f0-365b-431a-88fc-22a3f6c9ec4b" containerName="registry-server" 
Jan 20 11:53:00 crc kubenswrapper[4725]: E0120 11:53:00.411727 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a768f0-365b-431a-88fc-22a3f6c9ec4b" containerName="extract-utilities" Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.411736 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a768f0-365b-431a-88fc-22a3f6c9ec4b" containerName="extract-utilities" Jan 20 11:53:00 crc kubenswrapper[4725]: E0120 11:53:00.411748 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a768f0-365b-431a-88fc-22a3f6c9ec4b" containerName="extract-content" Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.411757 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a768f0-365b-431a-88fc-22a3f6c9ec4b" containerName="extract-content" Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.411921 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="61a768f0-365b-431a-88fc-22a3f6c9ec4b" containerName="registry-server" Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.413453 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.423267 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pkr8m"] Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.571292 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlvrm\" (UniqueName: \"kubernetes.io/projected/07e24694-fcc0-41b2-9576-fd0c86d1dca3-kube-api-access-zlvrm\") pod \"community-operators-pkr8m\" (UID: \"07e24694-fcc0-41b2-9576-fd0c86d1dca3\") " pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.571375 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07e24694-fcc0-41b2-9576-fd0c86d1dca3-utilities\") pod \"community-operators-pkr8m\" (UID: \"07e24694-fcc0-41b2-9576-fd0c86d1dca3\") " pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.571453 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07e24694-fcc0-41b2-9576-fd0c86d1dca3-catalog-content\") pod \"community-operators-pkr8m\" (UID: \"07e24694-fcc0-41b2-9576-fd0c86d1dca3\") " pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.672657 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlvrm\" (UniqueName: \"kubernetes.io/projected/07e24694-fcc0-41b2-9576-fd0c86d1dca3-kube-api-access-zlvrm\") pod \"community-operators-pkr8m\" (UID: \"07e24694-fcc0-41b2-9576-fd0c86d1dca3\") " pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.672742 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07e24694-fcc0-41b2-9576-fd0c86d1dca3-utilities\") pod \"community-operators-pkr8m\" (UID: \"07e24694-fcc0-41b2-9576-fd0c86d1dca3\") " pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.672774 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07e24694-fcc0-41b2-9576-fd0c86d1dca3-catalog-content\") pod \"community-operators-pkr8m\" (UID: \"07e24694-fcc0-41b2-9576-fd0c86d1dca3\") " pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.673519 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07e24694-fcc0-41b2-9576-fd0c86d1dca3-catalog-content\") pod \"community-operators-pkr8m\" (UID: \"07e24694-fcc0-41b2-9576-fd0c86d1dca3\") " pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.673518 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07e24694-fcc0-41b2-9576-fd0c86d1dca3-utilities\") pod \"community-operators-pkr8m\" (UID: \"07e24694-fcc0-41b2-9576-fd0c86d1dca3\") " pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.702260 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlvrm\" (UniqueName: \"kubernetes.io/projected/07e24694-fcc0-41b2-9576-fd0c86d1dca3-kube-api-access-zlvrm\") pod \"community-operators-pkr8m\" (UID: \"07e24694-fcc0-41b2-9576-fd0c86d1dca3\") " pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.733910 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:01 crc kubenswrapper[4725]: I0120 11:53:01.277683 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pkr8m"] Jan 20 11:53:01 crc kubenswrapper[4725]: I0120 11:53:01.549814 4725 generic.go:334] "Generic (PLEG): container finished" podID="07e24694-fcc0-41b2-9576-fd0c86d1dca3" containerID="da22ef86a1ac0a76f9b7b69d55ab71f455aa7e22dad0ee0a44eaac9af5c1a2ed" exitCode=0 Jan 20 11:53:01 crc kubenswrapper[4725]: I0120 11:53:01.549926 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pkr8m" event={"ID":"07e24694-fcc0-41b2-9576-fd0c86d1dca3","Type":"ContainerDied","Data":"da22ef86a1ac0a76f9b7b69d55ab71f455aa7e22dad0ee0a44eaac9af5c1a2ed"} Jan 20 11:53:01 crc kubenswrapper[4725]: I0120 11:53:01.550009 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pkr8m" event={"ID":"07e24694-fcc0-41b2-9576-fd0c86d1dca3","Type":"ContainerStarted","Data":"389cf9eb3a31670eadbf0da4f7f3b31dee04694d0d2ded89763aaf5965f02fd2"} Jan 20 11:53:01 crc kubenswrapper[4725]: I0120 11:53:01.552237 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 11:53:03 crc kubenswrapper[4725]: I0120 11:53:03.581813 4725 generic.go:334] "Generic (PLEG): container finished" podID="07e24694-fcc0-41b2-9576-fd0c86d1dca3" containerID="02f46186e291387bcfd42cf8a34e9a7797e758d9b5f066e27d263e79e3f5f743" exitCode=0 Jan 20 11:53:03 crc kubenswrapper[4725]: I0120 11:53:03.582226 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pkr8m" event={"ID":"07e24694-fcc0-41b2-9576-fd0c86d1dca3","Type":"ContainerDied","Data":"02f46186e291387bcfd42cf8a34e9a7797e758d9b5f066e27d263e79e3f5f743"} Jan 20 11:53:05 crc kubenswrapper[4725]: I0120 11:53:05.001503 4725 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["service-telemetry/infrawatch-operators-9qrfz"] Jan 20 11:53:05 crc kubenswrapper[4725]: I0120 11:53:05.003036 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-9qrfz" Jan 20 11:53:05 crc kubenswrapper[4725]: I0120 11:53:05.014623 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-9qrfz"] Jan 20 11:53:05 crc kubenswrapper[4725]: I0120 11:53:05.127761 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrjcv\" (UniqueName: \"kubernetes.io/projected/b2c46661-7c6f-442f-af6c-6c0d71674631-kube-api-access-lrjcv\") pod \"infrawatch-operators-9qrfz\" (UID: \"b2c46661-7c6f-442f-af6c-6c0d71674631\") " pod="service-telemetry/infrawatch-operators-9qrfz" Jan 20 11:53:05 crc kubenswrapper[4725]: I0120 11:53:05.229773 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrjcv\" (UniqueName: \"kubernetes.io/projected/b2c46661-7c6f-442f-af6c-6c0d71674631-kube-api-access-lrjcv\") pod \"infrawatch-operators-9qrfz\" (UID: \"b2c46661-7c6f-442f-af6c-6c0d71674631\") " pod="service-telemetry/infrawatch-operators-9qrfz" Jan 20 11:53:05 crc kubenswrapper[4725]: I0120 11:53:05.253529 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrjcv\" (UniqueName: \"kubernetes.io/projected/b2c46661-7c6f-442f-af6c-6c0d71674631-kube-api-access-lrjcv\") pod \"infrawatch-operators-9qrfz\" (UID: \"b2c46661-7c6f-442f-af6c-6c0d71674631\") " pod="service-telemetry/infrawatch-operators-9qrfz" Jan 20 11:53:05 crc kubenswrapper[4725]: I0120 11:53:05.323368 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-9qrfz" Jan 20 11:53:05 crc kubenswrapper[4725]: I0120 11:53:05.558638 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-9qrfz"] Jan 20 11:53:05 crc kubenswrapper[4725]: W0120 11:53:05.569387 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2c46661_7c6f_442f_af6c_6c0d71674631.slice/crio-5ef32346829112d72e899852a1d0bb268526381a6a96a8182d689bd76e615a93 WatchSource:0}: Error finding container 5ef32346829112d72e899852a1d0bb268526381a6a96a8182d689bd76e615a93: Status 404 returned error can't find the container with id 5ef32346829112d72e899852a1d0bb268526381a6a96a8182d689bd76e615a93 Jan 20 11:53:05 crc kubenswrapper[4725]: I0120 11:53:05.630979 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-9qrfz" event={"ID":"b2c46661-7c6f-442f-af6c-6c0d71674631","Type":"ContainerStarted","Data":"5ef32346829112d72e899852a1d0bb268526381a6a96a8182d689bd76e615a93"} Jan 20 11:53:05 crc kubenswrapper[4725]: I0120 11:53:05.634711 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pkr8m" event={"ID":"07e24694-fcc0-41b2-9576-fd0c86d1dca3","Type":"ContainerStarted","Data":"7d345b1ac23a2a32278cf549cb1868e322695a2fd5111b8eda889ec9743fb4d8"} Jan 20 11:53:05 crc kubenswrapper[4725]: I0120 11:53:05.657066 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pkr8m" podStartSLOduration=2.780000518 podStartE2EDuration="5.657038507s" podCreationTimestamp="2026-01-20 11:53:00 +0000 UTC" firstStartedPulling="2026-01-20 11:53:01.551823547 +0000 UTC m=+2909.760145530" lastFinishedPulling="2026-01-20 11:53:04.428861536 +0000 UTC m=+2912.637183519" observedRunningTime="2026-01-20 11:53:05.65587089 +0000 UTC m=+2913.864192873" 
watchObservedRunningTime="2026-01-20 11:53:05.657038507 +0000 UTC m=+2913.865360470" Jan 20 11:53:06 crc kubenswrapper[4725]: I0120 11:53:06.647700 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-9qrfz" event={"ID":"b2c46661-7c6f-442f-af6c-6c0d71674631","Type":"ContainerStarted","Data":"d0d05e6f5f1461e5cc4350ffbe1cee7ed58116cb168d47c7bcac55d852088364"} Jan 20 11:53:06 crc kubenswrapper[4725]: I0120 11:53:06.676466 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-9qrfz" podStartSLOduration=2.531157291 podStartE2EDuration="2.676436419s" podCreationTimestamp="2026-01-20 11:53:04 +0000 UTC" firstStartedPulling="2026-01-20 11:53:05.571880764 +0000 UTC m=+2913.780202737" lastFinishedPulling="2026-01-20 11:53:05.717159892 +0000 UTC m=+2913.925481865" observedRunningTime="2026-01-20 11:53:06.673161466 +0000 UTC m=+2914.881483439" watchObservedRunningTime="2026-01-20 11:53:06.676436419 +0000 UTC m=+2914.884758392" Jan 20 11:53:10 crc kubenswrapper[4725]: I0120 11:53:10.735008 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:10 crc kubenswrapper[4725]: I0120 11:53:10.736714 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:10 crc kubenswrapper[4725]: I0120 11:53:10.784897 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:11 crc kubenswrapper[4725]: I0120 11:53:11.750874 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:14 crc kubenswrapper[4725]: I0120 11:53:14.381483 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pkr8m"] Jan 20 11:53:14 crc 
kubenswrapper[4725]: I0120 11:53:14.733046 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pkr8m" podUID="07e24694-fcc0-41b2-9576-fd0c86d1dca3" containerName="registry-server" containerID="cri-o://7d345b1ac23a2a32278cf549cb1868e322695a2fd5111b8eda889ec9743fb4d8" gracePeriod=2 Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.323800 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-9qrfz" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.323885 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-9qrfz" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.360122 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-9qrfz" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.629806 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.735137 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07e24694-fcc0-41b2-9576-fd0c86d1dca3-utilities\") pod \"07e24694-fcc0-41b2-9576-fd0c86d1dca3\" (UID: \"07e24694-fcc0-41b2-9576-fd0c86d1dca3\") " Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.735266 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlvrm\" (UniqueName: \"kubernetes.io/projected/07e24694-fcc0-41b2-9576-fd0c86d1dca3-kube-api-access-zlvrm\") pod \"07e24694-fcc0-41b2-9576-fd0c86d1dca3\" (UID: \"07e24694-fcc0-41b2-9576-fd0c86d1dca3\") " Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.735317 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07e24694-fcc0-41b2-9576-fd0c86d1dca3-catalog-content\") pod \"07e24694-fcc0-41b2-9576-fd0c86d1dca3\" (UID: \"07e24694-fcc0-41b2-9576-fd0c86d1dca3\") " Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.738589 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07e24694-fcc0-41b2-9576-fd0c86d1dca3-utilities" (OuterVolumeSpecName: "utilities") pod "07e24694-fcc0-41b2-9576-fd0c86d1dca3" (UID: "07e24694-fcc0-41b2-9576-fd0c86d1dca3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.743451 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07e24694-fcc0-41b2-9576-fd0c86d1dca3-kube-api-access-zlvrm" (OuterVolumeSpecName: "kube-api-access-zlvrm") pod "07e24694-fcc0-41b2-9576-fd0c86d1dca3" (UID: "07e24694-fcc0-41b2-9576-fd0c86d1dca3"). InnerVolumeSpecName "kube-api-access-zlvrm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.752041 4725 generic.go:334] "Generic (PLEG): container finished" podID="07e24694-fcc0-41b2-9576-fd0c86d1dca3" containerID="7d345b1ac23a2a32278cf549cb1868e322695a2fd5111b8eda889ec9743fb4d8" exitCode=0 Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.752094 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.752205 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pkr8m" event={"ID":"07e24694-fcc0-41b2-9576-fd0c86d1dca3","Type":"ContainerDied","Data":"7d345b1ac23a2a32278cf549cb1868e322695a2fd5111b8eda889ec9743fb4d8"} Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.752244 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pkr8m" event={"ID":"07e24694-fcc0-41b2-9576-fd0c86d1dca3","Type":"ContainerDied","Data":"389cf9eb3a31670eadbf0da4f7f3b31dee04694d0d2ded89763aaf5965f02fd2"} Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.752267 4725 scope.go:117] "RemoveContainer" containerID="7d345b1ac23a2a32278cf549cb1868e322695a2fd5111b8eda889ec9743fb4d8" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.787932 4725 scope.go:117] "RemoveContainer" containerID="02f46186e291387bcfd42cf8a34e9a7797e758d9b5f066e27d263e79e3f5f743" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.790558 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-9qrfz" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.794927 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07e24694-fcc0-41b2-9576-fd0c86d1dca3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod 
"07e24694-fcc0-41b2-9576-fd0c86d1dca3" (UID: "07e24694-fcc0-41b2-9576-fd0c86d1dca3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.818005 4725 scope.go:117] "RemoveContainer" containerID="da22ef86a1ac0a76f9b7b69d55ab71f455aa7e22dad0ee0a44eaac9af5c1a2ed" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.836301 4725 scope.go:117] "RemoveContainer" containerID="7d345b1ac23a2a32278cf549cb1868e322695a2fd5111b8eda889ec9743fb4d8" Jan 20 11:53:15 crc kubenswrapper[4725]: E0120 11:53:15.836966 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d345b1ac23a2a32278cf549cb1868e322695a2fd5111b8eda889ec9743fb4d8\": container with ID starting with 7d345b1ac23a2a32278cf549cb1868e322695a2fd5111b8eda889ec9743fb4d8 not found: ID does not exist" containerID="7d345b1ac23a2a32278cf549cb1868e322695a2fd5111b8eda889ec9743fb4d8" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.837010 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d345b1ac23a2a32278cf549cb1868e322695a2fd5111b8eda889ec9743fb4d8"} err="failed to get container status \"7d345b1ac23a2a32278cf549cb1868e322695a2fd5111b8eda889ec9743fb4d8\": rpc error: code = NotFound desc = could not find container \"7d345b1ac23a2a32278cf549cb1868e322695a2fd5111b8eda889ec9743fb4d8\": container with ID starting with 7d345b1ac23a2a32278cf549cb1868e322695a2fd5111b8eda889ec9743fb4d8 not found: ID does not exist" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.837038 4725 scope.go:117] "RemoveContainer" containerID="02f46186e291387bcfd42cf8a34e9a7797e758d9b5f066e27d263e79e3f5f743" Jan 20 11:53:15 crc kubenswrapper[4725]: E0120 11:53:15.837388 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"02f46186e291387bcfd42cf8a34e9a7797e758d9b5f066e27d263e79e3f5f743\": container with ID starting with 02f46186e291387bcfd42cf8a34e9a7797e758d9b5f066e27d263e79e3f5f743 not found: ID does not exist" containerID="02f46186e291387bcfd42cf8a34e9a7797e758d9b5f066e27d263e79e3f5f743" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.837405 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07e24694-fcc0-41b2-9576-fd0c86d1dca3-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.837449 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlvrm\" (UniqueName: \"kubernetes.io/projected/07e24694-fcc0-41b2-9576-fd0c86d1dca3-kube-api-access-zlvrm\") on node \"crc\" DevicePath \"\"" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.837464 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07e24694-fcc0-41b2-9576-fd0c86d1dca3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.837417 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02f46186e291387bcfd42cf8a34e9a7797e758d9b5f066e27d263e79e3f5f743"} err="failed to get container status \"02f46186e291387bcfd42cf8a34e9a7797e758d9b5f066e27d263e79e3f5f743\": rpc error: code = NotFound desc = could not find container \"02f46186e291387bcfd42cf8a34e9a7797e758d9b5f066e27d263e79e3f5f743\": container with ID starting with 02f46186e291387bcfd42cf8a34e9a7797e758d9b5f066e27d263e79e3f5f743 not found: ID does not exist" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.837497 4725 scope.go:117] "RemoveContainer" containerID="da22ef86a1ac0a76f9b7b69d55ab71f455aa7e22dad0ee0a44eaac9af5c1a2ed" Jan 20 11:53:15 crc kubenswrapper[4725]: E0120 11:53:15.839196 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"da22ef86a1ac0a76f9b7b69d55ab71f455aa7e22dad0ee0a44eaac9af5c1a2ed\": container with ID starting with da22ef86a1ac0a76f9b7b69d55ab71f455aa7e22dad0ee0a44eaac9af5c1a2ed not found: ID does not exist" containerID="da22ef86a1ac0a76f9b7b69d55ab71f455aa7e22dad0ee0a44eaac9af5c1a2ed" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.839250 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da22ef86a1ac0a76f9b7b69d55ab71f455aa7e22dad0ee0a44eaac9af5c1a2ed"} err="failed to get container status \"da22ef86a1ac0a76f9b7b69d55ab71f455aa7e22dad0ee0a44eaac9af5c1a2ed\": rpc error: code = NotFound desc = could not find container \"da22ef86a1ac0a76f9b7b69d55ab71f455aa7e22dad0ee0a44eaac9af5c1a2ed\": container with ID starting with da22ef86a1ac0a76f9b7b69d55ab71f455aa7e22dad0ee0a44eaac9af5c1a2ed not found: ID does not exist" Jan 20 11:53:16 crc kubenswrapper[4725]: I0120 11:53:16.140584 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pkr8m"] Jan 20 11:53:16 crc kubenswrapper[4725]: I0120 11:53:16.145837 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pkr8m"] Jan 20 11:53:16 crc kubenswrapper[4725]: I0120 11:53:16.941221 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07e24694-fcc0-41b2-9576-fd0c86d1dca3" path="/var/lib/kubelet/pods/07e24694-fcc0-41b2-9576-fd0c86d1dca3/volumes" Jan 20 11:53:18 crc kubenswrapper[4725]: I0120 11:53:18.983447 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-9qrfz"] Jan 20 11:53:18 crc kubenswrapper[4725]: I0120 11:53:18.983804 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-9qrfz" podUID="b2c46661-7c6f-442f-af6c-6c0d71674631" containerName="registry-server" 
containerID="cri-o://d0d05e6f5f1461e5cc4350ffbe1cee7ed58116cb168d47c7bcac55d852088364" gracePeriod=2 Jan 20 11:53:19 crc kubenswrapper[4725]: I0120 11:53:19.378907 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-9qrfz" Jan 20 11:53:19 crc kubenswrapper[4725]: I0120 11:53:19.405381 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrjcv\" (UniqueName: \"kubernetes.io/projected/b2c46661-7c6f-442f-af6c-6c0d71674631-kube-api-access-lrjcv\") pod \"b2c46661-7c6f-442f-af6c-6c0d71674631\" (UID: \"b2c46661-7c6f-442f-af6c-6c0d71674631\") " Jan 20 11:53:19 crc kubenswrapper[4725]: I0120 11:53:19.414591 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2c46661-7c6f-442f-af6c-6c0d71674631-kube-api-access-lrjcv" (OuterVolumeSpecName: "kube-api-access-lrjcv") pod "b2c46661-7c6f-442f-af6c-6c0d71674631" (UID: "b2c46661-7c6f-442f-af6c-6c0d71674631"). InnerVolumeSpecName "kube-api-access-lrjcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:53:19 crc kubenswrapper[4725]: I0120 11:53:19.508470 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrjcv\" (UniqueName: \"kubernetes.io/projected/b2c46661-7c6f-442f-af6c-6c0d71674631-kube-api-access-lrjcv\") on node \"crc\" DevicePath \"\"" Jan 20 11:53:19 crc kubenswrapper[4725]: I0120 11:53:19.793421 4725 generic.go:334] "Generic (PLEG): container finished" podID="b2c46661-7c6f-442f-af6c-6c0d71674631" containerID="d0d05e6f5f1461e5cc4350ffbe1cee7ed58116cb168d47c7bcac55d852088364" exitCode=0 Jan 20 11:53:19 crc kubenswrapper[4725]: I0120 11:53:19.793514 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-9qrfz" Jan 20 11:53:19 crc kubenswrapper[4725]: I0120 11:53:19.793545 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-9qrfz" event={"ID":"b2c46661-7c6f-442f-af6c-6c0d71674631","Type":"ContainerDied","Data":"d0d05e6f5f1461e5cc4350ffbe1cee7ed58116cb168d47c7bcac55d852088364"} Jan 20 11:53:19 crc kubenswrapper[4725]: I0120 11:53:19.793626 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-9qrfz" event={"ID":"b2c46661-7c6f-442f-af6c-6c0d71674631","Type":"ContainerDied","Data":"5ef32346829112d72e899852a1d0bb268526381a6a96a8182d689bd76e615a93"} Jan 20 11:53:19 crc kubenswrapper[4725]: I0120 11:53:19.793656 4725 scope.go:117] "RemoveContainer" containerID="d0d05e6f5f1461e5cc4350ffbe1cee7ed58116cb168d47c7bcac55d852088364" Jan 20 11:53:19 crc kubenswrapper[4725]: I0120 11:53:19.819043 4725 scope.go:117] "RemoveContainer" containerID="d0d05e6f5f1461e5cc4350ffbe1cee7ed58116cb168d47c7bcac55d852088364" Jan 20 11:53:19 crc kubenswrapper[4725]: E0120 11:53:19.819822 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0d05e6f5f1461e5cc4350ffbe1cee7ed58116cb168d47c7bcac55d852088364\": container with ID starting with d0d05e6f5f1461e5cc4350ffbe1cee7ed58116cb168d47c7bcac55d852088364 not found: ID does not exist" containerID="d0d05e6f5f1461e5cc4350ffbe1cee7ed58116cb168d47c7bcac55d852088364" Jan 20 11:53:19 crc kubenswrapper[4725]: I0120 11:53:19.819904 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0d05e6f5f1461e5cc4350ffbe1cee7ed58116cb168d47c7bcac55d852088364"} err="failed to get container status \"d0d05e6f5f1461e5cc4350ffbe1cee7ed58116cb168d47c7bcac55d852088364\": rpc error: code = NotFound desc = could not find container 
\"d0d05e6f5f1461e5cc4350ffbe1cee7ed58116cb168d47c7bcac55d852088364\": container with ID starting with d0d05e6f5f1461e5cc4350ffbe1cee7ed58116cb168d47c7bcac55d852088364 not found: ID does not exist" Jan 20 11:53:19 crc kubenswrapper[4725]: I0120 11:53:19.835404 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-9qrfz"] Jan 20 11:53:19 crc kubenswrapper[4725]: I0120 11:53:19.845178 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-9qrfz"] Jan 20 11:53:20 crc kubenswrapper[4725]: I0120 11:53:20.941175 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2c46661-7c6f-442f-af6c-6c0d71674631" path="/var/lib/kubelet/pods/b2c46661-7c6f-442f-af6c-6c0d71674631/volumes" Jan 20 11:55:26 crc kubenswrapper[4725]: I0120 11:55:26.728549 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:55:26 crc kubenswrapper[4725]: I0120 11:55:26.729761 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:55:56 crc kubenswrapper[4725]: I0120 11:55:56.727943 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:55:56 crc kubenswrapper[4725]: I0120 11:55:56.728688 4725 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:56:26 crc kubenswrapper[4725]: I0120 11:56:26.727541 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:56:26 crc kubenswrapper[4725]: I0120 11:56:26.728032 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:56:26 crc kubenswrapper[4725]: I0120 11:56:26.728150 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:56:26 crc kubenswrapper[4725]: I0120 11:56:26.728998 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a"} pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 11:56:26 crc kubenswrapper[4725]: I0120 11:56:26.729105 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" 
containerID="cri-o://5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" gracePeriod=600 Jan 20 11:56:26 crc kubenswrapper[4725]: E0120 11:56:26.865040 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:56:26 crc kubenswrapper[4725]: I0120 11:56:26.876624 4725 generic.go:334] "Generic (PLEG): container finished" podID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" exitCode=0 Jan 20 11:56:26 crc kubenswrapper[4725]: I0120 11:56:26.876687 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerDied","Data":"5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a"} Jan 20 11:56:26 crc kubenswrapper[4725]: I0120 11:56:26.876745 4725 scope.go:117] "RemoveContainer" containerID="b487c1da831952b6be0c22d02ea25935db4d2c51c76529ece3ca853c064d9030" Jan 20 11:56:26 crc kubenswrapper[4725]: I0120 11:56:26.877885 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" Jan 20 11:56:26 crc kubenswrapper[4725]: E0120 11:56:26.878819 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" 
podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:56:37 crc kubenswrapper[4725]: I0120 11:56:37.933551 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" Jan 20 11:56:37 crc kubenswrapper[4725]: E0120 11:56:37.934508 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:56:49 crc kubenswrapper[4725]: I0120 11:56:49.932659 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" Jan 20 11:56:49 crc kubenswrapper[4725]: E0120 11:56:49.933815 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:57:03 crc kubenswrapper[4725]: I0120 11:57:03.932920 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" Jan 20 11:57:03 crc kubenswrapper[4725]: E0120 11:57:03.934017 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:57:18 crc kubenswrapper[4725]: I0120 11:57:18.937517 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" Jan 20 11:57:18 crc kubenswrapper[4725]: E0120 11:57:18.940660 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:57:31 crc kubenswrapper[4725]: I0120 11:57:31.933292 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" Jan 20 11:57:31 crc kubenswrapper[4725]: E0120 11:57:31.934357 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:57:46 crc kubenswrapper[4725]: I0120 11:57:46.933295 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" Jan 20 11:57:46 crc kubenswrapper[4725]: E0120 11:57:46.934405 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:58:00 crc kubenswrapper[4725]: I0120 11:58:00.932865 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" Jan 20 11:58:00 crc kubenswrapper[4725]: E0120 11:58:00.934037 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:58:14 crc kubenswrapper[4725]: I0120 11:58:14.931908 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" Jan 20 11:58:14 crc kubenswrapper[4725]: E0120 11:58:14.933134 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:58:28 crc kubenswrapper[4725]: I0120 11:58:28.933203 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" Jan 20 11:58:28 crc kubenswrapper[4725]: E0120 11:58:28.934362 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:58:33 crc kubenswrapper[4725]: I0120 11:58:33.105350 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-szgzx"] Jan 20 11:58:33 crc kubenswrapper[4725]: E0120 11:58:33.106427 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2c46661-7c6f-442f-af6c-6c0d71674631" containerName="registry-server" Jan 20 11:58:33 crc kubenswrapper[4725]: I0120 11:58:33.106447 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2c46661-7c6f-442f-af6c-6c0d71674631" containerName="registry-server" Jan 20 11:58:33 crc kubenswrapper[4725]: E0120 11:58:33.106476 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e24694-fcc0-41b2-9576-fd0c86d1dca3" containerName="extract-content" Jan 20 11:58:33 crc kubenswrapper[4725]: I0120 11:58:33.106485 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e24694-fcc0-41b2-9576-fd0c86d1dca3" containerName="extract-content" Jan 20 11:58:33 crc kubenswrapper[4725]: E0120 11:58:33.106497 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e24694-fcc0-41b2-9576-fd0c86d1dca3" containerName="extract-utilities" Jan 20 11:58:33 crc kubenswrapper[4725]: I0120 11:58:33.106507 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e24694-fcc0-41b2-9576-fd0c86d1dca3" containerName="extract-utilities" Jan 20 11:58:33 crc kubenswrapper[4725]: E0120 11:58:33.106531 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e24694-fcc0-41b2-9576-fd0c86d1dca3" containerName="registry-server" Jan 20 11:58:33 crc kubenswrapper[4725]: I0120 11:58:33.106538 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e24694-fcc0-41b2-9576-fd0c86d1dca3" 
containerName="registry-server" Jan 20 11:58:33 crc kubenswrapper[4725]: I0120 11:58:33.106769 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="07e24694-fcc0-41b2-9576-fd0c86d1dca3" containerName="registry-server" Jan 20 11:58:33 crc kubenswrapper[4725]: I0120 11:58:33.106792 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2c46661-7c6f-442f-af6c-6c0d71674631" containerName="registry-server" Jan 20 11:58:33 crc kubenswrapper[4725]: I0120 11:58:33.107558 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-szgzx" Jan 20 11:58:33 crc kubenswrapper[4725]: I0120 11:58:33.113222 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-szgzx"] Jan 20 11:58:33 crc kubenswrapper[4725]: I0120 11:58:33.286123 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjvwl\" (UniqueName: \"kubernetes.io/projected/ff83a417-3909-4bf5-9300-40129abe7ad3-kube-api-access-pjvwl\") pod \"infrawatch-operators-szgzx\" (UID: \"ff83a417-3909-4bf5-9300-40129abe7ad3\") " pod="service-telemetry/infrawatch-operators-szgzx" Jan 20 11:58:33 crc kubenswrapper[4725]: I0120 11:58:33.388194 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjvwl\" (UniqueName: \"kubernetes.io/projected/ff83a417-3909-4bf5-9300-40129abe7ad3-kube-api-access-pjvwl\") pod \"infrawatch-operators-szgzx\" (UID: \"ff83a417-3909-4bf5-9300-40129abe7ad3\") " pod="service-telemetry/infrawatch-operators-szgzx" Jan 20 11:58:33 crc kubenswrapper[4725]: I0120 11:58:33.413459 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjvwl\" (UniqueName: \"kubernetes.io/projected/ff83a417-3909-4bf5-9300-40129abe7ad3-kube-api-access-pjvwl\") pod \"infrawatch-operators-szgzx\" (UID: \"ff83a417-3909-4bf5-9300-40129abe7ad3\") " 
pod="service-telemetry/infrawatch-operators-szgzx" Jan 20 11:58:33 crc kubenswrapper[4725]: I0120 11:58:33.438131 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-szgzx" Jan 20 11:58:33 crc kubenswrapper[4725]: I0120 11:58:33.715350 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-szgzx"] Jan 20 11:58:33 crc kubenswrapper[4725]: I0120 11:58:33.726737 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 11:58:34 crc kubenswrapper[4725]: I0120 11:58:34.069067 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-szgzx" event={"ID":"ff83a417-3909-4bf5-9300-40129abe7ad3","Type":"ContainerStarted","Data":"0c538df5b648e4853108875e8d3249cd28b2664ce5c30e8c46dbe5cd7eab68f0"} Jan 20 11:58:34 crc kubenswrapper[4725]: I0120 11:58:34.069190 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-szgzx" event={"ID":"ff83a417-3909-4bf5-9300-40129abe7ad3","Type":"ContainerStarted","Data":"10f8f1370bf7c297a0eefd23ad5a3a876be0d75b5983ab22ccd39fceb71cea67"} Jan 20 11:58:34 crc kubenswrapper[4725]: I0120 11:58:34.106124 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-szgzx" podStartSLOduration=0.985034207 podStartE2EDuration="1.106090985s" podCreationTimestamp="2026-01-20 11:58:33 +0000 UTC" firstStartedPulling="2026-01-20 11:58:33.726318327 +0000 UTC m=+3241.934640300" lastFinishedPulling="2026-01-20 11:58:33.847375105 +0000 UTC m=+3242.055697078" observedRunningTime="2026-01-20 11:58:34.090580286 +0000 UTC m=+3242.298902259" watchObservedRunningTime="2026-01-20 11:58:34.106090985 +0000 UTC m=+3242.314412958" Jan 20 11:58:43 crc kubenswrapper[4725]: I0120 11:58:43.438665 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="service-telemetry/infrawatch-operators-szgzx" Jan 20 11:58:43 crc kubenswrapper[4725]: I0120 11:58:43.440245 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-szgzx" Jan 20 11:58:43 crc kubenswrapper[4725]: I0120 11:58:43.473161 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-szgzx" Jan 20 11:58:43 crc kubenswrapper[4725]: I0120 11:58:43.932574 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" Jan 20 11:58:43 crc kubenswrapper[4725]: E0120 11:58:43.932895 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:58:44 crc kubenswrapper[4725]: I0120 11:58:44.245715 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-szgzx" Jan 20 11:58:45 crc kubenswrapper[4725]: I0120 11:58:45.874996 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-szgzx"] Jan 20 11:58:47 crc kubenswrapper[4725]: I0120 11:58:47.237656 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-szgzx" podUID="ff83a417-3909-4bf5-9300-40129abe7ad3" containerName="registry-server" containerID="cri-o://0c538df5b648e4853108875e8d3249cd28b2664ce5c30e8c46dbe5cd7eab68f0" gracePeriod=2 Jan 20 11:58:47 crc kubenswrapper[4725]: I0120 11:58:47.616972 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-szgzx" Jan 20 11:58:47 crc kubenswrapper[4725]: I0120 11:58:47.776474 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjvwl\" (UniqueName: \"kubernetes.io/projected/ff83a417-3909-4bf5-9300-40129abe7ad3-kube-api-access-pjvwl\") pod \"ff83a417-3909-4bf5-9300-40129abe7ad3\" (UID: \"ff83a417-3909-4bf5-9300-40129abe7ad3\") " Jan 20 11:58:47 crc kubenswrapper[4725]: I0120 11:58:47.785226 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff83a417-3909-4bf5-9300-40129abe7ad3-kube-api-access-pjvwl" (OuterVolumeSpecName: "kube-api-access-pjvwl") pod "ff83a417-3909-4bf5-9300-40129abe7ad3" (UID: "ff83a417-3909-4bf5-9300-40129abe7ad3"). InnerVolumeSpecName "kube-api-access-pjvwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:58:47 crc kubenswrapper[4725]: I0120 11:58:47.879442 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjvwl\" (UniqueName: \"kubernetes.io/projected/ff83a417-3909-4bf5-9300-40129abe7ad3-kube-api-access-pjvwl\") on node \"crc\" DevicePath \"\"" Jan 20 11:58:48 crc kubenswrapper[4725]: I0120 11:58:48.249207 4725 generic.go:334] "Generic (PLEG): container finished" podID="ff83a417-3909-4bf5-9300-40129abe7ad3" containerID="0c538df5b648e4853108875e8d3249cd28b2664ce5c30e8c46dbe5cd7eab68f0" exitCode=0 Jan 20 11:58:48 crc kubenswrapper[4725]: I0120 11:58:48.249453 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-szgzx" event={"ID":"ff83a417-3909-4bf5-9300-40129abe7ad3","Type":"ContainerDied","Data":"0c538df5b648e4853108875e8d3249cd28b2664ce5c30e8c46dbe5cd7eab68f0"} Jan 20 11:58:48 crc kubenswrapper[4725]: I0120 11:58:48.251046 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-szgzx" 
event={"ID":"ff83a417-3909-4bf5-9300-40129abe7ad3","Type":"ContainerDied","Data":"10f8f1370bf7c297a0eefd23ad5a3a876be0d75b5983ab22ccd39fceb71cea67"}
Jan 20 11:58:48 crc kubenswrapper[4725]: I0120 11:58:48.249554    4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-szgzx"
Jan 20 11:58:48 crc kubenswrapper[4725]: I0120 11:58:48.251227    4725 scope.go:117] "RemoveContainer" containerID="0c538df5b648e4853108875e8d3249cd28b2664ce5c30e8c46dbe5cd7eab68f0"
Jan 20 11:58:48 crc kubenswrapper[4725]: I0120 11:58:48.279821    4725 scope.go:117] "RemoveContainer" containerID="0c538df5b648e4853108875e8d3249cd28b2664ce5c30e8c46dbe5cd7eab68f0"
Jan 20 11:58:48 crc kubenswrapper[4725]: E0120 11:58:48.281500    4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c538df5b648e4853108875e8d3249cd28b2664ce5c30e8c46dbe5cd7eab68f0\": container with ID starting with 0c538df5b648e4853108875e8d3249cd28b2664ce5c30e8c46dbe5cd7eab68f0 not found: ID does not exist" containerID="0c538df5b648e4853108875e8d3249cd28b2664ce5c30e8c46dbe5cd7eab68f0"
Jan 20 11:58:48 crc kubenswrapper[4725]: I0120 11:58:48.281548    4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c538df5b648e4853108875e8d3249cd28b2664ce5c30e8c46dbe5cd7eab68f0"} err="failed to get container status \"0c538df5b648e4853108875e8d3249cd28b2664ce5c30e8c46dbe5cd7eab68f0\": rpc error: code = NotFound desc = could not find container \"0c538df5b648e4853108875e8d3249cd28b2664ce5c30e8c46dbe5cd7eab68f0\": container with ID starting with 0c538df5b648e4853108875e8d3249cd28b2664ce5c30e8c46dbe5cd7eab68f0 not found: ID does not exist"
Jan 20 11:58:48 crc kubenswrapper[4725]: I0120 11:58:48.306558    4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-szgzx"]
Jan 20 11:58:48 crc kubenswrapper[4725]: I0120 11:58:48.314286    4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-szgzx"]
Jan 20 11:58:48 crc kubenswrapper[4725]: I0120 11:58:48.950281    4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff83a417-3909-4bf5-9300-40129abe7ad3" path="/var/lib/kubelet/pods/ff83a417-3909-4bf5-9300-40129abe7ad3/volumes"
Jan 20 11:58:56 crc kubenswrapper[4725]: I0120 11:58:56.937739    4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a"
Jan 20 11:58:56 crc kubenswrapper[4725]: E0120 11:58:56.939060    4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 11:59:09 crc kubenswrapper[4725]: I0120 11:59:09.932050    4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a"
Jan 20 11:59:09 crc kubenswrapper[4725]: E0120 11:59:09.933254    4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 11:59:24 crc kubenswrapper[4725]: I0120 11:59:24.932957    4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a"
Jan 20 11:59:24 crc kubenswrapper[4725]: E0120 11:59:24.934129    4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 11:59:37 crc kubenswrapper[4725]: I0120 11:59:37.932289    4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a"
Jan 20 11:59:37 crc kubenswrapper[4725]: E0120 11:59:37.933358    4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 11:59:49 crc kubenswrapper[4725]: I0120 11:59:49.932593    4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a"
Jan 20 11:59:49 crc kubenswrapper[4725]: E0120 11:59:49.935125    4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.361923    4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c5ntc"]
Jan 20 11:59:57 crc kubenswrapper[4725]: E0120 11:59:57.363418    4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff83a417-3909-4bf5-9300-40129abe7ad3" containerName="registry-server"
Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.363441    4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff83a417-3909-4bf5-9300-40129abe7ad3" containerName="registry-server"
Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.363642    4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff83a417-3909-4bf5-9300-40129abe7ad3" containerName="registry-server"
Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.365105    4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c5ntc"
Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.376820    4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c5ntc"]
Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.467046    4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70c7db0b-067f-4c18-85c3-2a7cafffd47f-utilities\") pod \"redhat-operators-c5ntc\" (UID: \"70c7db0b-067f-4c18-85c3-2a7cafffd47f\") " pod="openshift-marketplace/redhat-operators-c5ntc"
Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.467136    4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70c7db0b-067f-4c18-85c3-2a7cafffd47f-catalog-content\") pod \"redhat-operators-c5ntc\" (UID: \"70c7db0b-067f-4c18-85c3-2a7cafffd47f\") " pod="openshift-marketplace/redhat-operators-c5ntc"
Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.467158    4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6w48\" (UniqueName: \"kubernetes.io/projected/70c7db0b-067f-4c18-85c3-2a7cafffd47f-kube-api-access-g6w48\") pod \"redhat-operators-c5ntc\" (UID: \"70c7db0b-067f-4c18-85c3-2a7cafffd47f\") " pod="openshift-marketplace/redhat-operators-c5ntc"
Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.569687    4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70c7db0b-067f-4c18-85c3-2a7cafffd47f-catalog-content\") pod \"redhat-operators-c5ntc\" (UID: \"70c7db0b-067f-4c18-85c3-2a7cafffd47f\") " pod="openshift-marketplace/redhat-operators-c5ntc"
Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.569758    4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6w48\" (UniqueName: \"kubernetes.io/projected/70c7db0b-067f-4c18-85c3-2a7cafffd47f-kube-api-access-g6w48\") pod \"redhat-operators-c5ntc\" (UID: \"70c7db0b-067f-4c18-85c3-2a7cafffd47f\") " pod="openshift-marketplace/redhat-operators-c5ntc"
Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.569917    4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70c7db0b-067f-4c18-85c3-2a7cafffd47f-utilities\") pod \"redhat-operators-c5ntc\" (UID: \"70c7db0b-067f-4c18-85c3-2a7cafffd47f\") " pod="openshift-marketplace/redhat-operators-c5ntc"
Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.570451    4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70c7db0b-067f-4c18-85c3-2a7cafffd47f-catalog-content\") pod \"redhat-operators-c5ntc\" (UID: \"70c7db0b-067f-4c18-85c3-2a7cafffd47f\") " pod="openshift-marketplace/redhat-operators-c5ntc"
Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.570744    4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70c7db0b-067f-4c18-85c3-2a7cafffd47f-utilities\") pod \"redhat-operators-c5ntc\" (UID: \"70c7db0b-067f-4c18-85c3-2a7cafffd47f\") " pod="openshift-marketplace/redhat-operators-c5ntc"
Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.597055    4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6w48\" (UniqueName: \"kubernetes.io/projected/70c7db0b-067f-4c18-85c3-2a7cafffd47f-kube-api-access-g6w48\") pod \"redhat-operators-c5ntc\" (UID: \"70c7db0b-067f-4c18-85c3-2a7cafffd47f\") " pod="openshift-marketplace/redhat-operators-c5ntc"
Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.698829    4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c5ntc"
Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.984764    4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c5ntc"]
Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.998556    4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c5ntc" event={"ID":"70c7db0b-067f-4c18-85c3-2a7cafffd47f","Type":"ContainerStarted","Data":"392e1ebcc5bf26919da577c07af527e8b8dccf334936990b1cd4a156ea61f191"}
Jan 20 11:59:59 crc kubenswrapper[4725]: I0120 11:59:59.009726    4725 generic.go:334] "Generic (PLEG): container finished" podID="70c7db0b-067f-4c18-85c3-2a7cafffd47f" containerID="1079df0375efb4eb9bcd9c38a063b6dc9db6e91c96df100fb0ccd8acade1f383" exitCode=0
Jan 20 11:59:59 crc kubenswrapper[4725]: I0120 11:59:59.009850    4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c5ntc" event={"ID":"70c7db0b-067f-4c18-85c3-2a7cafffd47f","Type":"ContainerDied","Data":"1079df0375efb4eb9bcd9c38a063b6dc9db6e91c96df100fb0ccd8acade1f383"}
Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.028974    4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c5ntc" event={"ID":"70c7db0b-067f-4c18-85c3-2a7cafffd47f","Type":"ContainerStarted","Data":"e79301da5d8df72fcd2b05fe703ec3f3b94657282860ca79e642a4562fbac045"}
Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.149347    4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942"]
Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.151396    4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942"
Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.160440    4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.160440    4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.175097    4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942"]
Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.268057    4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2554af70-a48f-4921-a6a6-407016260425-config-volume\") pod \"collect-profiles-29481840-t9942\" (UID: \"2554af70-a48f-4921-a6a6-407016260425\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942"
Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.268135    4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2554af70-a48f-4921-a6a6-407016260425-secret-volume\") pod \"collect-profiles-29481840-t9942\" (UID: \"2554af70-a48f-4921-a6a6-407016260425\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942"
Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.268587    4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mbm5\" (UniqueName: \"kubernetes.io/projected/2554af70-a48f-4921-a6a6-407016260425-kube-api-access-8mbm5\") pod \"collect-profiles-29481840-t9942\" (UID: \"2554af70-a48f-4921-a6a6-407016260425\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942"
Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.370476    4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mbm5\" (UniqueName: \"kubernetes.io/projected/2554af70-a48f-4921-a6a6-407016260425-kube-api-access-8mbm5\") pod \"collect-profiles-29481840-t9942\" (UID: \"2554af70-a48f-4921-a6a6-407016260425\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942"
Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.370617    4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2554af70-a48f-4921-a6a6-407016260425-config-volume\") pod \"collect-profiles-29481840-t9942\" (UID: \"2554af70-a48f-4921-a6a6-407016260425\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942"
Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.370642    4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2554af70-a48f-4921-a6a6-407016260425-secret-volume\") pod \"collect-profiles-29481840-t9942\" (UID: \"2554af70-a48f-4921-a6a6-407016260425\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942"
Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.376476    4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2554af70-a48f-4921-a6a6-407016260425-config-volume\") pod \"collect-profiles-29481840-t9942\" (UID: \"2554af70-a48f-4921-a6a6-407016260425\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942"
Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.389895    4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2554af70-a48f-4921-a6a6-407016260425-secret-volume\") pod \"collect-profiles-29481840-t9942\" (UID: \"2554af70-a48f-4921-a6a6-407016260425\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942"
Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.397219    4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mbm5\" (UniqueName: \"kubernetes.io/projected/2554af70-a48f-4921-a6a6-407016260425-kube-api-access-8mbm5\") pod \"collect-profiles-29481840-t9942\" (UID: \"2554af70-a48f-4921-a6a6-407016260425\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942"
Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.490434    4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942"
Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.950321    4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942"]
Jan 20 12:00:01 crc kubenswrapper[4725]: I0120 12:00:01.040323    4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942" event={"ID":"2554af70-a48f-4921-a6a6-407016260425","Type":"ContainerStarted","Data":"2c0425d87fc1b48fbc261bcffbbc7b2f08b74a79c6a7a3781b51817e41fde95d"}
Jan 20 12:00:01 crc kubenswrapper[4725]: I0120 12:00:01.933226    4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a"
Jan 20 12:00:01 crc kubenswrapper[4725]: E0120 12:00:01.934151    4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:00:02 crc kubenswrapper[4725]: I0120 12:00:02.051472    4725 generic.go:334] "Generic (PLEG): container finished" podID="70c7db0b-067f-4c18-85c3-2a7cafffd47f" containerID="e79301da5d8df72fcd2b05fe703ec3f3b94657282860ca79e642a4562fbac045" exitCode=0
Jan 20 12:00:02 crc kubenswrapper[4725]: I0120 12:00:02.051584    4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c5ntc" event={"ID":"70c7db0b-067f-4c18-85c3-2a7cafffd47f","Type":"ContainerDied","Data":"e79301da5d8df72fcd2b05fe703ec3f3b94657282860ca79e642a4562fbac045"}
Jan 20 12:00:02 crc kubenswrapper[4725]: I0120 12:00:02.057432    4725 generic.go:334] "Generic (PLEG): container finished" podID="2554af70-a48f-4921-a6a6-407016260425" containerID="10ce9079465756a929b5da70283bfeabe7bc38f9a8f2768b4b30865ed5b9c3cd" exitCode=0
Jan 20 12:00:02 crc kubenswrapper[4725]: I0120 12:00:02.057508    4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942" event={"ID":"2554af70-a48f-4921-a6a6-407016260425","Type":"ContainerDied","Data":"10ce9079465756a929b5da70283bfeabe7bc38f9a8f2768b4b30865ed5b9c3cd"}
Jan 20 12:00:03 crc kubenswrapper[4725]: I0120 12:00:03.349474    4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942"
Jan 20 12:00:03 crc kubenswrapper[4725]: I0120 12:00:03.527817    4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mbm5\" (UniqueName: \"kubernetes.io/projected/2554af70-a48f-4921-a6a6-407016260425-kube-api-access-8mbm5\") pod \"2554af70-a48f-4921-a6a6-407016260425\" (UID: \"2554af70-a48f-4921-a6a6-407016260425\") "
Jan 20 12:00:03 crc kubenswrapper[4725]: I0120 12:00:03.527912    4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2554af70-a48f-4921-a6a6-407016260425-config-volume\") pod \"2554af70-a48f-4921-a6a6-407016260425\" (UID: \"2554af70-a48f-4921-a6a6-407016260425\") "
Jan 20 12:00:03 crc kubenswrapper[4725]: I0120 12:00:03.527966    4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2554af70-a48f-4921-a6a6-407016260425-secret-volume\") pod \"2554af70-a48f-4921-a6a6-407016260425\" (UID: \"2554af70-a48f-4921-a6a6-407016260425\") "
Jan 20 12:00:03 crc kubenswrapper[4725]: I0120 12:00:03.528883    4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2554af70-a48f-4921-a6a6-407016260425-config-volume" (OuterVolumeSpecName: "config-volume") pod "2554af70-a48f-4921-a6a6-407016260425" (UID: "2554af70-a48f-4921-a6a6-407016260425"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 12:00:03 crc kubenswrapper[4725]: I0120 12:00:03.534567    4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2554af70-a48f-4921-a6a6-407016260425-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2554af70-a48f-4921-a6a6-407016260425" (UID: "2554af70-a48f-4921-a6a6-407016260425"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 12:00:03 crc kubenswrapper[4725]: I0120 12:00:03.535245    4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2554af70-a48f-4921-a6a6-407016260425-kube-api-access-8mbm5" (OuterVolumeSpecName: "kube-api-access-8mbm5") pod "2554af70-a48f-4921-a6a6-407016260425" (UID: "2554af70-a48f-4921-a6a6-407016260425"). InnerVolumeSpecName "kube-api-access-8mbm5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 12:00:03 crc kubenswrapper[4725]: I0120 12:00:03.630462    4725 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2554af70-a48f-4921-a6a6-407016260425-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 20 12:00:03 crc kubenswrapper[4725]: I0120 12:00:03.630548    4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mbm5\" (UniqueName: \"kubernetes.io/projected/2554af70-a48f-4921-a6a6-407016260425-kube-api-access-8mbm5\") on node \"crc\" DevicePath \"\""
Jan 20 12:00:03 crc kubenswrapper[4725]: I0120 12:00:03.630561    4725 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2554af70-a48f-4921-a6a6-407016260425-config-volume\") on node \"crc\" DevicePath \"\""
Jan 20 12:00:04 crc kubenswrapper[4725]: I0120 12:00:04.079461    4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c5ntc" event={"ID":"70c7db0b-067f-4c18-85c3-2a7cafffd47f","Type":"ContainerStarted","Data":"7a869b1f4daa2db671a113728334225d001eb39cab5d81a159db6238f82388b3"}
Jan 20 12:00:04 crc kubenswrapper[4725]: I0120 12:00:04.083362    4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942" event={"ID":"2554af70-a48f-4921-a6a6-407016260425","Type":"ContainerDied","Data":"2c0425d87fc1b48fbc261bcffbbc7b2f08b74a79c6a7a3781b51817e41fde95d"}
Jan 20 12:00:04 crc kubenswrapper[4725]: I0120 12:00:04.083398    4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c0425d87fc1b48fbc261bcffbbc7b2f08b74a79c6a7a3781b51817e41fde95d"
Jan 20 12:00:04 crc kubenswrapper[4725]: I0120 12:00:04.083457    4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942"
Jan 20 12:00:04 crc kubenswrapper[4725]: I0120 12:00:04.117502    4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c5ntc" podStartSLOduration=2.716132135 podStartE2EDuration="7.11731471s" podCreationTimestamp="2026-01-20 11:59:57 +0000 UTC" firstStartedPulling="2026-01-20 11:59:59.012231974 +0000 UTC m=+3327.220553947" lastFinishedPulling="2026-01-20 12:00:03.413414549 +0000 UTC m=+3331.621736522" observedRunningTime="2026-01-20 12:00:04.109758952 +0000 UTC m=+3332.318080935" watchObservedRunningTime="2026-01-20 12:00:04.11731471 +0000 UTC m=+3332.325636683"
Jan 20 12:00:04 crc kubenswrapper[4725]: I0120 12:00:04.435619    4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22"]
Jan 20 12:00:04 crc kubenswrapper[4725]: I0120 12:00:04.443335    4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22"]
Jan 20 12:00:04 crc kubenswrapper[4725]: I0120 12:00:04.944308    4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a41df2e-87f8-4dc4-a80c-36bd1bac44aa" path="/var/lib/kubelet/pods/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa/volumes"
Jan 20 12:00:07 crc kubenswrapper[4725]: I0120 12:00:07.699941    4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c5ntc"
Jan 20 12:00:07 crc kubenswrapper[4725]: I0120 12:00:07.700549    4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c5ntc"
Jan 20 12:00:08 crc kubenswrapper[4725]: I0120 12:00:08.758022    4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c5ntc" podUID="70c7db0b-067f-4c18-85c3-2a7cafffd47f" containerName="registry-server" probeResult="failure" output=<
Jan 20 12:00:08 crc kubenswrapper[4725]: 	timeout: failed to connect service ":50051" within 1s
Jan 20 12:00:08 crc kubenswrapper[4725]: >
Jan 20 12:00:14 crc kubenswrapper[4725]: I0120 12:00:14.933633    4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a"
Jan 20 12:00:14 crc kubenswrapper[4725]: E0120 12:00:14.934943    4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:00:17 crc kubenswrapper[4725]: I0120 12:00:17.501218    4725 scope.go:117] "RemoveContainer" containerID="df134c08a91a6b779bd70a8a4d9a198b2216cb01743c5cae1cc33fd6809cfc61"
Jan 20 12:00:17 crc kubenswrapper[4725]: I0120 12:00:17.752448    4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c5ntc"
Jan 20 12:00:17 crc kubenswrapper[4725]: I0120 12:00:17.806885    4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c5ntc"
Jan 20 12:00:18 crc kubenswrapper[4725]: I0120 12:00:18.002356    4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c5ntc"]
Jan 20 12:00:19 crc kubenswrapper[4725]: I0120 12:00:19.211983    4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c5ntc" podUID="70c7db0b-067f-4c18-85c3-2a7cafffd47f" containerName="registry-server" containerID="cri-o://7a869b1f4daa2db671a113728334225d001eb39cab5d81a159db6238f82388b3" gracePeriod=2
Jan 20 12:00:19 crc kubenswrapper[4725]: I0120 12:00:19.636940    4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c5ntc"
Jan 20 12:00:19 crc kubenswrapper[4725]: I0120 12:00:19.829796    4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70c7db0b-067f-4c18-85c3-2a7cafffd47f-catalog-content\") pod \"70c7db0b-067f-4c18-85c3-2a7cafffd47f\" (UID: \"70c7db0b-067f-4c18-85c3-2a7cafffd47f\") "
Jan 20 12:00:19 crc kubenswrapper[4725]: I0120 12:00:19.830120    4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6w48\" (UniqueName: \"kubernetes.io/projected/70c7db0b-067f-4c18-85c3-2a7cafffd47f-kube-api-access-g6w48\") pod \"70c7db0b-067f-4c18-85c3-2a7cafffd47f\" (UID: \"70c7db0b-067f-4c18-85c3-2a7cafffd47f\") "
Jan 20 12:00:19 crc kubenswrapper[4725]: I0120 12:00:19.830192    4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70c7db0b-067f-4c18-85c3-2a7cafffd47f-utilities\") pod \"70c7db0b-067f-4c18-85c3-2a7cafffd47f\" (UID: \"70c7db0b-067f-4c18-85c3-2a7cafffd47f\") "
Jan 20 12:00:19 crc kubenswrapper[4725]: I0120 12:00:19.831251    4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70c7db0b-067f-4c18-85c3-2a7cafffd47f-utilities" (OuterVolumeSpecName: "utilities") pod "70c7db0b-067f-4c18-85c3-2a7cafffd47f" (UID: "70c7db0b-067f-4c18-85c3-2a7cafffd47f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 12:00:19 crc kubenswrapper[4725]: I0120 12:00:19.837781    4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c7db0b-067f-4c18-85c3-2a7cafffd47f-kube-api-access-g6w48" (OuterVolumeSpecName: "kube-api-access-g6w48") pod "70c7db0b-067f-4c18-85c3-2a7cafffd47f" (UID: "70c7db0b-067f-4c18-85c3-2a7cafffd47f"). InnerVolumeSpecName "kube-api-access-g6w48". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 12:00:19 crc kubenswrapper[4725]: I0120 12:00:19.932426    4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6w48\" (UniqueName: \"kubernetes.io/projected/70c7db0b-067f-4c18-85c3-2a7cafffd47f-kube-api-access-g6w48\") on node \"crc\" DevicePath \"\""
Jan 20 12:00:19 crc kubenswrapper[4725]: I0120 12:00:19.932479    4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70c7db0b-067f-4c18-85c3-2a7cafffd47f-utilities\") on node \"crc\" DevicePath \"\""
Jan 20 12:00:19 crc kubenswrapper[4725]: I0120 12:00:19.993227    4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70c7db0b-067f-4c18-85c3-2a7cafffd47f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70c7db0b-067f-4c18-85c3-2a7cafffd47f" (UID: "70c7db0b-067f-4c18-85c3-2a7cafffd47f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.034742    4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70c7db0b-067f-4c18-85c3-2a7cafffd47f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.249997    4725 generic.go:334] "Generic (PLEG): container finished" podID="70c7db0b-067f-4c18-85c3-2a7cafffd47f" containerID="7a869b1f4daa2db671a113728334225d001eb39cab5d81a159db6238f82388b3" exitCode=0
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.250066    4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c5ntc" event={"ID":"70c7db0b-067f-4c18-85c3-2a7cafffd47f","Type":"ContainerDied","Data":"7a869b1f4daa2db671a113728334225d001eb39cab5d81a159db6238f82388b3"}
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.250141    4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c5ntc" event={"ID":"70c7db0b-067f-4c18-85c3-2a7cafffd47f","Type":"ContainerDied","Data":"392e1ebcc5bf26919da577c07af527e8b8dccf334936990b1cd4a156ea61f191"}
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.250169    4725 scope.go:117] "RemoveContainer" containerID="7a869b1f4daa2db671a113728334225d001eb39cab5d81a159db6238f82388b3"
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.250192    4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c5ntc"
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.335599    4725 scope.go:117] "RemoveContainer" containerID="e79301da5d8df72fcd2b05fe703ec3f3b94657282860ca79e642a4562fbac045"
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.345131    4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c5ntc"]
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.354164    4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c5ntc"]
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.386501    4725 scope.go:117] "RemoveContainer" containerID="1079df0375efb4eb9bcd9c38a063b6dc9db6e91c96df100fb0ccd8acade1f383"
Jan 20 12:00:20 crc kubenswrapper[4725]: E0120 12:00:20.388922    4725 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70c7db0b_067f_4c18_85c3_2a7cafffd47f.slice/crio-392e1ebcc5bf26919da577c07af527e8b8dccf334936990b1cd4a156ea61f191\": RecentStats: unable to find data in memory cache]"
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.405646    4725 scope.go:117] "RemoveContainer" containerID="7a869b1f4daa2db671a113728334225d001eb39cab5d81a159db6238f82388b3"
Jan 20 12:00:20 crc kubenswrapper[4725]: E0120 12:00:20.406161    4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a869b1f4daa2db671a113728334225d001eb39cab5d81a159db6238f82388b3\": container with ID starting with 7a869b1f4daa2db671a113728334225d001eb39cab5d81a159db6238f82388b3 not found: ID does not exist" containerID="7a869b1f4daa2db671a113728334225d001eb39cab5d81a159db6238f82388b3"
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.406209    4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a869b1f4daa2db671a113728334225d001eb39cab5d81a159db6238f82388b3"} err="failed to get container status \"7a869b1f4daa2db671a113728334225d001eb39cab5d81a159db6238f82388b3\": rpc error: code = NotFound desc = could not find container \"7a869b1f4daa2db671a113728334225d001eb39cab5d81a159db6238f82388b3\": container with ID starting with 7a869b1f4daa2db671a113728334225d001eb39cab5d81a159db6238f82388b3 not found: ID does not exist"
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.406238    4725 scope.go:117] "RemoveContainer" containerID="e79301da5d8df72fcd2b05fe703ec3f3b94657282860ca79e642a4562fbac045"
Jan 20 12:00:20 crc kubenswrapper[4725]: E0120 12:00:20.406944    4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e79301da5d8df72fcd2b05fe703ec3f3b94657282860ca79e642a4562fbac045\": container with ID starting with e79301da5d8df72fcd2b05fe703ec3f3b94657282860ca79e642a4562fbac045 not found: ID does not exist" containerID="e79301da5d8df72fcd2b05fe703ec3f3b94657282860ca79e642a4562fbac045"
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.406979    4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e79301da5d8df72fcd2b05fe703ec3f3b94657282860ca79e642a4562fbac045"} err="failed to get container status \"e79301da5d8df72fcd2b05fe703ec3f3b94657282860ca79e642a4562fbac045\": rpc error: code = NotFound desc = could not find container \"e79301da5d8df72fcd2b05fe703ec3f3b94657282860ca79e642a4562fbac045\": container with ID starting with e79301da5d8df72fcd2b05fe703ec3f3b94657282860ca79e642a4562fbac045 not found: ID does not exist"
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.406995    4725 scope.go:117] "RemoveContainer" containerID="1079df0375efb4eb9bcd9c38a063b6dc9db6e91c96df100fb0ccd8acade1f383"
Jan 20 12:00:20 crc kubenswrapper[4725]: E0120 12:00:20.407299    4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1079df0375efb4eb9bcd9c38a063b6dc9db6e91c96df100fb0ccd8acade1f383\": container with ID starting with 1079df0375efb4eb9bcd9c38a063b6dc9db6e91c96df100fb0ccd8acade1f383 not found: ID does not exist" containerID="1079df0375efb4eb9bcd9c38a063b6dc9db6e91c96df100fb0ccd8acade1f383"
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.407326    4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1079df0375efb4eb9bcd9c38a063b6dc9db6e91c96df100fb0ccd8acade1f383"} err="failed to get container status \"1079df0375efb4eb9bcd9c38a063b6dc9db6e91c96df100fb0ccd8acade1f383\": rpc error: code = NotFound desc = could not find container \"1079df0375efb4eb9bcd9c38a063b6dc9db6e91c96df100fb0ccd8acade1f383\": container with ID starting with 1079df0375efb4eb9bcd9c38a063b6dc9db6e91c96df100fb0ccd8acade1f383 not found: ID does not exist"
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.942133    4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70c7db0b-067f-4c18-85c3-2a7cafffd47f" path="/var/lib/kubelet/pods/70c7db0b-067f-4c18-85c3-2a7cafffd47f/volumes"
Jan 20 12:00:28 crc kubenswrapper[4725]: I0120 12:00:28.932686    4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a"
Jan 20 12:00:28 crc kubenswrapper[4725]: E0120 12:00:28.933793    4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:00:39 crc kubenswrapper[4725]: I0120 12:00:39.934013    4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a"
Jan 20 12:00:39 crc kubenswrapper[4725]: E0120 12:00:39.936941    4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:00:51 crc kubenswrapper[4725]: I0120 12:00:51.933245    4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a"
Jan 20 12:00:51 crc kubenswrapper[4725]: E0120 12:00:51.934335    4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:01:03 crc kubenswrapper[4725]: I0120 12:01:03.932898    4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a"
Jan 20 12:01:03 crc kubenswrapper[4725]: E0120 12:01:03.934614    4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:01:16 crc kubenswrapper[4725]: I0120 12:01:16.932414    4725 scope.go:117]
"RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" Jan 20 12:01:16 crc kubenswrapper[4725]: E0120 12:01:16.933369 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:01:27 crc kubenswrapper[4725]: I0120 12:01:27.932683 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" Jan 20 12:01:29 crc kubenswrapper[4725]: I0120 12:01:29.069638 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"3414b9d2e27158720e99db24bacd938ad4eb7fdbcbbf9083ad3088cf697a9831"} Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.463758 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p4f8v"] Jan 20 12:01:37 crc kubenswrapper[4725]: E0120 12:01:37.466560 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c7db0b-067f-4c18-85c3-2a7cafffd47f" containerName="extract-content" Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.466735 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c7db0b-067f-4c18-85c3-2a7cafffd47f" containerName="extract-content" Jan 20 12:01:37 crc kubenswrapper[4725]: E0120 12:01:37.466929 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c7db0b-067f-4c18-85c3-2a7cafffd47f" containerName="extract-utilities" Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.467027 4725 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="70c7db0b-067f-4c18-85c3-2a7cafffd47f" containerName="extract-utilities" Jan 20 12:01:37 crc kubenswrapper[4725]: E0120 12:01:37.467150 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2554af70-a48f-4921-a6a6-407016260425" containerName="collect-profiles" Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.467277 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="2554af70-a48f-4921-a6a6-407016260425" containerName="collect-profiles" Jan 20 12:01:37 crc kubenswrapper[4725]: E0120 12:01:37.467380 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c7db0b-067f-4c18-85c3-2a7cafffd47f" containerName="registry-server" Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.467461 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c7db0b-067f-4c18-85c3-2a7cafffd47f" containerName="registry-server" Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.467824 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c7db0b-067f-4c18-85c3-2a7cafffd47f" containerName="registry-server" Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.467972 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="2554af70-a48f-4921-a6a6-407016260425" containerName="collect-profiles" Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.469515 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p4f8v" Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.476222 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p4f8v"] Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.666246 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dcba88a-7550-4cc6-965c-43ca26a8ac63-catalog-content\") pod \"certified-operators-p4f8v\" (UID: \"5dcba88a-7550-4cc6-965c-43ca26a8ac63\") " pod="openshift-marketplace/certified-operators-p4f8v" Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.666422 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dcba88a-7550-4cc6-965c-43ca26a8ac63-utilities\") pod \"certified-operators-p4f8v\" (UID: \"5dcba88a-7550-4cc6-965c-43ca26a8ac63\") " pod="openshift-marketplace/certified-operators-p4f8v" Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.666488 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdmvk\" (UniqueName: \"kubernetes.io/projected/5dcba88a-7550-4cc6-965c-43ca26a8ac63-kube-api-access-sdmvk\") pod \"certified-operators-p4f8v\" (UID: \"5dcba88a-7550-4cc6-965c-43ca26a8ac63\") " pod="openshift-marketplace/certified-operators-p4f8v" Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.768618 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dcba88a-7550-4cc6-965c-43ca26a8ac63-utilities\") pod \"certified-operators-p4f8v\" (UID: \"5dcba88a-7550-4cc6-965c-43ca26a8ac63\") " pod="openshift-marketplace/certified-operators-p4f8v" Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.768746 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-sdmvk\" (UniqueName: \"kubernetes.io/projected/5dcba88a-7550-4cc6-965c-43ca26a8ac63-kube-api-access-sdmvk\") pod \"certified-operators-p4f8v\" (UID: \"5dcba88a-7550-4cc6-965c-43ca26a8ac63\") " pod="openshift-marketplace/certified-operators-p4f8v" Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.768821 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dcba88a-7550-4cc6-965c-43ca26a8ac63-catalog-content\") pod \"certified-operators-p4f8v\" (UID: \"5dcba88a-7550-4cc6-965c-43ca26a8ac63\") " pod="openshift-marketplace/certified-operators-p4f8v" Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.769293 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dcba88a-7550-4cc6-965c-43ca26a8ac63-utilities\") pod \"certified-operators-p4f8v\" (UID: \"5dcba88a-7550-4cc6-965c-43ca26a8ac63\") " pod="openshift-marketplace/certified-operators-p4f8v" Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.769666 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dcba88a-7550-4cc6-965c-43ca26a8ac63-catalog-content\") pod \"certified-operators-p4f8v\" (UID: \"5dcba88a-7550-4cc6-965c-43ca26a8ac63\") " pod="openshift-marketplace/certified-operators-p4f8v" Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.793189 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdmvk\" (UniqueName: \"kubernetes.io/projected/5dcba88a-7550-4cc6-965c-43ca26a8ac63-kube-api-access-sdmvk\") pod \"certified-operators-p4f8v\" (UID: \"5dcba88a-7550-4cc6-965c-43ca26a8ac63\") " pod="openshift-marketplace/certified-operators-p4f8v" Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.800351 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p4f8v" Jan 20 12:01:38 crc kubenswrapper[4725]: I0120 12:01:38.489592 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p4f8v"] Jan 20 12:01:39 crc kubenswrapper[4725]: I0120 12:01:39.169747 4725 generic.go:334] "Generic (PLEG): container finished" podID="5dcba88a-7550-4cc6-965c-43ca26a8ac63" containerID="6fdc0b72e67946c65fe68a6fd9f9e4acdec7c7d7defb13b5c1a2b20cac270e27" exitCode=0 Jan 20 12:01:39 crc kubenswrapper[4725]: I0120 12:01:39.177706 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4f8v" event={"ID":"5dcba88a-7550-4cc6-965c-43ca26a8ac63","Type":"ContainerDied","Data":"6fdc0b72e67946c65fe68a6fd9f9e4acdec7c7d7defb13b5c1a2b20cac270e27"} Jan 20 12:01:39 crc kubenswrapper[4725]: I0120 12:01:39.177858 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4f8v" event={"ID":"5dcba88a-7550-4cc6-965c-43ca26a8ac63","Type":"ContainerStarted","Data":"913cd3e9113ce71406315493bcb40cc6004d717b2eb1135025136e0800cb3fd7"} Jan 20 12:01:41 crc kubenswrapper[4725]: I0120 12:01:41.191964 4725 generic.go:334] "Generic (PLEG): container finished" podID="5dcba88a-7550-4cc6-965c-43ca26a8ac63" containerID="3e4cf49ca410c8e757f4724cfb34c0e8e483eeffb2418b2307e3620c69ff6f46" exitCode=0 Jan 20 12:01:41 crc kubenswrapper[4725]: I0120 12:01:41.192032 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4f8v" event={"ID":"5dcba88a-7550-4cc6-965c-43ca26a8ac63","Type":"ContainerDied","Data":"3e4cf49ca410c8e757f4724cfb34c0e8e483eeffb2418b2307e3620c69ff6f46"} Jan 20 12:01:42 crc kubenswrapper[4725]: I0120 12:01:42.204768 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4f8v" 
event={"ID":"5dcba88a-7550-4cc6-965c-43ca26a8ac63","Type":"ContainerStarted","Data":"55243acafb96e909d10cc9dd1835b0e437d7c67406a68c95250774d8fee3e424"} Jan 20 12:01:42 crc kubenswrapper[4725]: I0120 12:01:42.233993 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p4f8v" podStartSLOduration=2.766273622 podStartE2EDuration="5.233966144s" podCreationTimestamp="2026-01-20 12:01:37 +0000 UTC" firstStartedPulling="2026-01-20 12:01:39.172277398 +0000 UTC m=+3427.380599371" lastFinishedPulling="2026-01-20 12:01:41.63996992 +0000 UTC m=+3429.848291893" observedRunningTime="2026-01-20 12:01:42.233625255 +0000 UTC m=+3430.441947228" watchObservedRunningTime="2026-01-20 12:01:42.233966144 +0000 UTC m=+3430.442288117" Jan 20 12:01:47 crc kubenswrapper[4725]: I0120 12:01:47.801616 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p4f8v" Jan 20 12:01:47 crc kubenswrapper[4725]: I0120 12:01:47.804520 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p4f8v" Jan 20 12:01:47 crc kubenswrapper[4725]: I0120 12:01:47.848393 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p4f8v" Jan 20 12:01:48 crc kubenswrapper[4725]: I0120 12:01:48.321147 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p4f8v" Jan 20 12:01:48 crc kubenswrapper[4725]: I0120 12:01:48.377978 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p4f8v"] Jan 20 12:01:50 crc kubenswrapper[4725]: I0120 12:01:50.291541 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p4f8v" podUID="5dcba88a-7550-4cc6-965c-43ca26a8ac63" containerName="registry-server" 
containerID="cri-o://55243acafb96e909d10cc9dd1835b0e437d7c67406a68c95250774d8fee3e424" gracePeriod=2 Jan 20 12:01:50 crc kubenswrapper[4725]: I0120 12:01:50.731598 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p4f8v" Jan 20 12:01:50 crc kubenswrapper[4725]: I0120 12:01:50.796063 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dcba88a-7550-4cc6-965c-43ca26a8ac63-utilities\") pod \"5dcba88a-7550-4cc6-965c-43ca26a8ac63\" (UID: \"5dcba88a-7550-4cc6-965c-43ca26a8ac63\") " Jan 20 12:01:50 crc kubenswrapper[4725]: I0120 12:01:50.796198 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdmvk\" (UniqueName: \"kubernetes.io/projected/5dcba88a-7550-4cc6-965c-43ca26a8ac63-kube-api-access-sdmvk\") pod \"5dcba88a-7550-4cc6-965c-43ca26a8ac63\" (UID: \"5dcba88a-7550-4cc6-965c-43ca26a8ac63\") " Jan 20 12:01:50 crc kubenswrapper[4725]: I0120 12:01:50.796264 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dcba88a-7550-4cc6-965c-43ca26a8ac63-catalog-content\") pod \"5dcba88a-7550-4cc6-965c-43ca26a8ac63\" (UID: \"5dcba88a-7550-4cc6-965c-43ca26a8ac63\") " Jan 20 12:01:50 crc kubenswrapper[4725]: I0120 12:01:50.808270 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dcba88a-7550-4cc6-965c-43ca26a8ac63-utilities" (OuterVolumeSpecName: "utilities") pod "5dcba88a-7550-4cc6-965c-43ca26a8ac63" (UID: "5dcba88a-7550-4cc6-965c-43ca26a8ac63"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 12:01:50 crc kubenswrapper[4725]: I0120 12:01:50.813736 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dcba88a-7550-4cc6-965c-43ca26a8ac63-kube-api-access-sdmvk" (OuterVolumeSpecName: "kube-api-access-sdmvk") pod "5dcba88a-7550-4cc6-965c-43ca26a8ac63" (UID: "5dcba88a-7550-4cc6-965c-43ca26a8ac63"). InnerVolumeSpecName "kube-api-access-sdmvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 12:01:50 crc kubenswrapper[4725]: I0120 12:01:50.852608 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dcba88a-7550-4cc6-965c-43ca26a8ac63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5dcba88a-7550-4cc6-965c-43ca26a8ac63" (UID: "5dcba88a-7550-4cc6-965c-43ca26a8ac63"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 12:01:50 crc kubenswrapper[4725]: I0120 12:01:50.898570 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dcba88a-7550-4cc6-965c-43ca26a8ac63-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 12:01:50 crc kubenswrapper[4725]: I0120 12:01:50.898638 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdmvk\" (UniqueName: \"kubernetes.io/projected/5dcba88a-7550-4cc6-965c-43ca26a8ac63-kube-api-access-sdmvk\") on node \"crc\" DevicePath \"\"" Jan 20 12:01:50 crc kubenswrapper[4725]: I0120 12:01:50.898655 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dcba88a-7550-4cc6-965c-43ca26a8ac63-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 12:01:51 crc kubenswrapper[4725]: I0120 12:01:51.306117 4725 generic.go:334] "Generic (PLEG): container finished" podID="5dcba88a-7550-4cc6-965c-43ca26a8ac63" 
containerID="55243acafb96e909d10cc9dd1835b0e437d7c67406a68c95250774d8fee3e424" exitCode=0 Jan 20 12:01:51 crc kubenswrapper[4725]: I0120 12:01:51.306202 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4f8v" event={"ID":"5dcba88a-7550-4cc6-965c-43ca26a8ac63","Type":"ContainerDied","Data":"55243acafb96e909d10cc9dd1835b0e437d7c67406a68c95250774d8fee3e424"} Jan 20 12:01:51 crc kubenswrapper[4725]: I0120 12:01:51.306256 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4f8v" event={"ID":"5dcba88a-7550-4cc6-965c-43ca26a8ac63","Type":"ContainerDied","Data":"913cd3e9113ce71406315493bcb40cc6004d717b2eb1135025136e0800cb3fd7"} Jan 20 12:01:51 crc kubenswrapper[4725]: I0120 12:01:51.306248 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p4f8v" Jan 20 12:01:51 crc kubenswrapper[4725]: I0120 12:01:51.306280 4725 scope.go:117] "RemoveContainer" containerID="55243acafb96e909d10cc9dd1835b0e437d7c67406a68c95250774d8fee3e424" Jan 20 12:01:51 crc kubenswrapper[4725]: I0120 12:01:51.334966 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p4f8v"] Jan 20 12:01:51 crc kubenswrapper[4725]: I0120 12:01:51.337055 4725 scope.go:117] "RemoveContainer" containerID="3e4cf49ca410c8e757f4724cfb34c0e8e483eeffb2418b2307e3620c69ff6f46" Jan 20 12:01:51 crc kubenswrapper[4725]: I0120 12:01:51.341538 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p4f8v"] Jan 20 12:01:51 crc kubenswrapper[4725]: I0120 12:01:51.357255 4725 scope.go:117] "RemoveContainer" containerID="6fdc0b72e67946c65fe68a6fd9f9e4acdec7c7d7defb13b5c1a2b20cac270e27" Jan 20 12:01:51 crc kubenswrapper[4725]: I0120 12:01:51.381800 4725 scope.go:117] "RemoveContainer" containerID="55243acafb96e909d10cc9dd1835b0e437d7c67406a68c95250774d8fee3e424" Jan 20 
12:01:51 crc kubenswrapper[4725]: E0120 12:01:51.382707 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55243acafb96e909d10cc9dd1835b0e437d7c67406a68c95250774d8fee3e424\": container with ID starting with 55243acafb96e909d10cc9dd1835b0e437d7c67406a68c95250774d8fee3e424 not found: ID does not exist" containerID="55243acafb96e909d10cc9dd1835b0e437d7c67406a68c95250774d8fee3e424" Jan 20 12:01:51 crc kubenswrapper[4725]: I0120 12:01:51.382778 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55243acafb96e909d10cc9dd1835b0e437d7c67406a68c95250774d8fee3e424"} err="failed to get container status \"55243acafb96e909d10cc9dd1835b0e437d7c67406a68c95250774d8fee3e424\": rpc error: code = NotFound desc = could not find container \"55243acafb96e909d10cc9dd1835b0e437d7c67406a68c95250774d8fee3e424\": container with ID starting with 55243acafb96e909d10cc9dd1835b0e437d7c67406a68c95250774d8fee3e424 not found: ID does not exist" Jan 20 12:01:51 crc kubenswrapper[4725]: I0120 12:01:51.382827 4725 scope.go:117] "RemoveContainer" containerID="3e4cf49ca410c8e757f4724cfb34c0e8e483eeffb2418b2307e3620c69ff6f46" Jan 20 12:01:51 crc kubenswrapper[4725]: E0120 12:01:51.383579 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e4cf49ca410c8e757f4724cfb34c0e8e483eeffb2418b2307e3620c69ff6f46\": container with ID starting with 3e4cf49ca410c8e757f4724cfb34c0e8e483eeffb2418b2307e3620c69ff6f46 not found: ID does not exist" containerID="3e4cf49ca410c8e757f4724cfb34c0e8e483eeffb2418b2307e3620c69ff6f46" Jan 20 12:01:51 crc kubenswrapper[4725]: I0120 12:01:51.383639 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e4cf49ca410c8e757f4724cfb34c0e8e483eeffb2418b2307e3620c69ff6f46"} err="failed to get container status 
\"3e4cf49ca410c8e757f4724cfb34c0e8e483eeffb2418b2307e3620c69ff6f46\": rpc error: code = NotFound desc = could not find container \"3e4cf49ca410c8e757f4724cfb34c0e8e483eeffb2418b2307e3620c69ff6f46\": container with ID starting with 3e4cf49ca410c8e757f4724cfb34c0e8e483eeffb2418b2307e3620c69ff6f46 not found: ID does not exist" Jan 20 12:01:51 crc kubenswrapper[4725]: I0120 12:01:51.383675 4725 scope.go:117] "RemoveContainer" containerID="6fdc0b72e67946c65fe68a6fd9f9e4acdec7c7d7defb13b5c1a2b20cac270e27" Jan 20 12:01:51 crc kubenswrapper[4725]: E0120 12:01:51.384131 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fdc0b72e67946c65fe68a6fd9f9e4acdec7c7d7defb13b5c1a2b20cac270e27\": container with ID starting with 6fdc0b72e67946c65fe68a6fd9f9e4acdec7c7d7defb13b5c1a2b20cac270e27 not found: ID does not exist" containerID="6fdc0b72e67946c65fe68a6fd9f9e4acdec7c7d7defb13b5c1a2b20cac270e27" Jan 20 12:01:51 crc kubenswrapper[4725]: I0120 12:01:51.384163 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fdc0b72e67946c65fe68a6fd9f9e4acdec7c7d7defb13b5c1a2b20cac270e27"} err="failed to get container status \"6fdc0b72e67946c65fe68a6fd9f9e4acdec7c7d7defb13b5c1a2b20cac270e27\": rpc error: code = NotFound desc = could not find container \"6fdc0b72e67946c65fe68a6fd9f9e4acdec7c7d7defb13b5c1a2b20cac270e27\": container with ID starting with 6fdc0b72e67946c65fe68a6fd9f9e4acdec7c7d7defb13b5c1a2b20cac270e27 not found: ID does not exist" Jan 20 12:01:52 crc kubenswrapper[4725]: I0120 12:01:52.944909 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dcba88a-7550-4cc6-965c-43ca26a8ac63" path="/var/lib/kubelet/pods/5dcba88a-7550-4cc6-965c-43ca26a8ac63/volumes" Jan 20 12:03:46 crc kubenswrapper[4725]: I0120 12:03:46.592600 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-blgnl"] Jan 20 12:03:46 
crc kubenswrapper[4725]: E0120 12:03:46.594539 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dcba88a-7550-4cc6-965c-43ca26a8ac63" containerName="extract-content" Jan 20 12:03:46 crc kubenswrapper[4725]: I0120 12:03:46.594593 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dcba88a-7550-4cc6-965c-43ca26a8ac63" containerName="extract-content" Jan 20 12:03:46 crc kubenswrapper[4725]: E0120 12:03:46.594671 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dcba88a-7550-4cc6-965c-43ca26a8ac63" containerName="registry-server" Jan 20 12:03:46 crc kubenswrapper[4725]: I0120 12:03:46.594892 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dcba88a-7550-4cc6-965c-43ca26a8ac63" containerName="registry-server" Jan 20 12:03:46 crc kubenswrapper[4725]: E0120 12:03:46.594905 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dcba88a-7550-4cc6-965c-43ca26a8ac63" containerName="extract-utilities" Jan 20 12:03:46 crc kubenswrapper[4725]: I0120 12:03:46.594915 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dcba88a-7550-4cc6-965c-43ca26a8ac63" containerName="extract-utilities" Jan 20 12:03:46 crc kubenswrapper[4725]: I0120 12:03:46.595186 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dcba88a-7550-4cc6-965c-43ca26a8ac63" containerName="registry-server" Jan 20 12:03:46 crc kubenswrapper[4725]: I0120 12:03:46.597221 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-blgnl" Jan 20 12:03:46 crc kubenswrapper[4725]: I0120 12:03:46.618775 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-blgnl"] Jan 20 12:03:46 crc kubenswrapper[4725]: I0120 12:03:46.874146 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/129a7977-fd61-4742-94da-f07dcd889975-utilities\") pod \"community-operators-blgnl\" (UID: \"129a7977-fd61-4742-94da-f07dcd889975\") " pod="openshift-marketplace/community-operators-blgnl" Jan 20 12:03:46 crc kubenswrapper[4725]: I0120 12:03:46.874257 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z692\" (UniqueName: \"kubernetes.io/projected/129a7977-fd61-4742-94da-f07dcd889975-kube-api-access-8z692\") pod \"community-operators-blgnl\" (UID: \"129a7977-fd61-4742-94da-f07dcd889975\") " pod="openshift-marketplace/community-operators-blgnl" Jan 20 12:03:46 crc kubenswrapper[4725]: I0120 12:03:46.874317 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/129a7977-fd61-4742-94da-f07dcd889975-catalog-content\") pod \"community-operators-blgnl\" (UID: \"129a7977-fd61-4742-94da-f07dcd889975\") " pod="openshift-marketplace/community-operators-blgnl" Jan 20 12:03:46 crc kubenswrapper[4725]: I0120 12:03:46.976283 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z692\" (UniqueName: \"kubernetes.io/projected/129a7977-fd61-4742-94da-f07dcd889975-kube-api-access-8z692\") pod \"community-operators-blgnl\" (UID: \"129a7977-fd61-4742-94da-f07dcd889975\") " pod="openshift-marketplace/community-operators-blgnl" Jan 20 12:03:46 crc kubenswrapper[4725]: I0120 12:03:46.976815 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/129a7977-fd61-4742-94da-f07dcd889975-catalog-content\") pod \"community-operators-blgnl\" (UID: \"129a7977-fd61-4742-94da-f07dcd889975\") " pod="openshift-marketplace/community-operators-blgnl" Jan 20 12:03:46 crc kubenswrapper[4725]: I0120 12:03:46.977480 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/129a7977-fd61-4742-94da-f07dcd889975-catalog-content\") pod \"community-operators-blgnl\" (UID: \"129a7977-fd61-4742-94da-f07dcd889975\") " pod="openshift-marketplace/community-operators-blgnl" Jan 20 12:03:46 crc kubenswrapper[4725]: I0120 12:03:46.977643 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/129a7977-fd61-4742-94da-f07dcd889975-utilities\") pod \"community-operators-blgnl\" (UID: \"129a7977-fd61-4742-94da-f07dcd889975\") " pod="openshift-marketplace/community-operators-blgnl" Jan 20 12:03:46 crc kubenswrapper[4725]: I0120 12:03:46.978543 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/129a7977-fd61-4742-94da-f07dcd889975-utilities\") pod \"community-operators-blgnl\" (UID: \"129a7977-fd61-4742-94da-f07dcd889975\") " pod="openshift-marketplace/community-operators-blgnl" Jan 20 12:03:46 crc kubenswrapper[4725]: I0120 12:03:46.999718 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z692\" (UniqueName: \"kubernetes.io/projected/129a7977-fd61-4742-94da-f07dcd889975-kube-api-access-8z692\") pod \"community-operators-blgnl\" (UID: \"129a7977-fd61-4742-94da-f07dcd889975\") " pod="openshift-marketplace/community-operators-blgnl" Jan 20 12:03:47 crc kubenswrapper[4725]: I0120 12:03:47.231003 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-blgnl" Jan 20 12:03:47 crc kubenswrapper[4725]: I0120 12:03:47.577519 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-blgnl"] Jan 20 12:03:47 crc kubenswrapper[4725]: I0120 12:03:47.789831 4725 generic.go:334] "Generic (PLEG): container finished" podID="129a7977-fd61-4742-94da-f07dcd889975" containerID="7d66c732db14bdc6b0e08d4bf49960c1d7fc1c029d20ae903c19d2a030d03c85" exitCode=0 Jan 20 12:03:47 crc kubenswrapper[4725]: I0120 12:03:47.790293 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-blgnl" event={"ID":"129a7977-fd61-4742-94da-f07dcd889975","Type":"ContainerDied","Data":"7d66c732db14bdc6b0e08d4bf49960c1d7fc1c029d20ae903c19d2a030d03c85"} Jan 20 12:03:47 crc kubenswrapper[4725]: I0120 12:03:47.790340 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-blgnl" event={"ID":"129a7977-fd61-4742-94da-f07dcd889975","Type":"ContainerStarted","Data":"e28e988c48cd24ed974de439a1c86425c5586d960673f8d53fc5d5bf8c75d826"} Jan 20 12:03:47 crc kubenswrapper[4725]: I0120 12:03:47.792339 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 12:03:48 crc kubenswrapper[4725]: I0120 12:03:48.803515 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-blgnl" event={"ID":"129a7977-fd61-4742-94da-f07dcd889975","Type":"ContainerStarted","Data":"584746dcd4d6c7d4426e652a0c0933b84532fecd85a7c82333762971cb76bb49"} Jan 20 12:03:49 crc kubenswrapper[4725]: I0120 12:03:49.819508 4725 generic.go:334] "Generic (PLEG): container finished" podID="129a7977-fd61-4742-94da-f07dcd889975" containerID="584746dcd4d6c7d4426e652a0c0933b84532fecd85a7c82333762971cb76bb49" exitCode=0 Jan 20 12:03:49 crc kubenswrapper[4725]: I0120 12:03:49.819624 4725 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-blgnl" event={"ID":"129a7977-fd61-4742-94da-f07dcd889975","Type":"ContainerDied","Data":"584746dcd4d6c7d4426e652a0c0933b84532fecd85a7c82333762971cb76bb49"} Jan 20 12:03:50 crc kubenswrapper[4725]: I0120 12:03:50.831676 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-blgnl" event={"ID":"129a7977-fd61-4742-94da-f07dcd889975","Type":"ContainerStarted","Data":"c25a037cb0d554213739100d3c4e9019850f4f06456ba80e8c3b84b1f293fb64"} Jan 20 12:03:50 crc kubenswrapper[4725]: I0120 12:03:50.856614 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-blgnl" podStartSLOduration=2.230689489 podStartE2EDuration="4.856580732s" podCreationTimestamp="2026-01-20 12:03:46 +0000 UTC" firstStartedPulling="2026-01-20 12:03:47.791934801 +0000 UTC m=+3556.000256784" lastFinishedPulling="2026-01-20 12:03:50.417826054 +0000 UTC m=+3558.626148027" observedRunningTime="2026-01-20 12:03:50.856571732 +0000 UTC m=+3559.064893705" watchObservedRunningTime="2026-01-20 12:03:50.856580732 +0000 UTC m=+3559.064902705" Jan 20 12:03:56 crc kubenswrapper[4725]: I0120 12:03:56.728385 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 12:03:56 crc kubenswrapper[4725]: I0120 12:03:56.729373 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 12:03:57 crc kubenswrapper[4725]: I0120 12:03:57.236100 4725 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-blgnl" Jan 20 12:03:57 crc kubenswrapper[4725]: I0120 12:03:57.236154 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-blgnl" Jan 20 12:03:57 crc kubenswrapper[4725]: I0120 12:03:57.291073 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-blgnl" Jan 20 12:03:57 crc kubenswrapper[4725]: I0120 12:03:57.951915 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-blgnl" Jan 20 12:03:58 crc kubenswrapper[4725]: I0120 12:03:58.006474 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-blgnl"] Jan 20 12:03:59 crc kubenswrapper[4725]: I0120 12:03:59.922948 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-blgnl" podUID="129a7977-fd61-4742-94da-f07dcd889975" containerName="registry-server" containerID="cri-o://c25a037cb0d554213739100d3c4e9019850f4f06456ba80e8c3b84b1f293fb64" gracePeriod=2 Jan 20 12:03:59 crc kubenswrapper[4725]: I0120 12:03:59.958838 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-rj8v4"] Jan 20 12:03:59 crc kubenswrapper[4725]: I0120 12:03:59.960939 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-rj8v4" Jan 20 12:03:59 crc kubenswrapper[4725]: I0120 12:03:59.971061 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-rj8v4"] Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.008532 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns6mt\" (UniqueName: \"kubernetes.io/projected/3ea261e4-31a5-47f1-b7da-585da56b41fd-kube-api-access-ns6mt\") pod \"infrawatch-operators-rj8v4\" (UID: \"3ea261e4-31a5-47f1-b7da-585da56b41fd\") " pod="service-telemetry/infrawatch-operators-rj8v4" Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.109718 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns6mt\" (UniqueName: \"kubernetes.io/projected/3ea261e4-31a5-47f1-b7da-585da56b41fd-kube-api-access-ns6mt\") pod \"infrawatch-operators-rj8v4\" (UID: \"3ea261e4-31a5-47f1-b7da-585da56b41fd\") " pod="service-telemetry/infrawatch-operators-rj8v4" Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.132265 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns6mt\" (UniqueName: \"kubernetes.io/projected/3ea261e4-31a5-47f1-b7da-585da56b41fd-kube-api-access-ns6mt\") pod \"infrawatch-operators-rj8v4\" (UID: \"3ea261e4-31a5-47f1-b7da-585da56b41fd\") " pod="service-telemetry/infrawatch-operators-rj8v4" Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.288174 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-rj8v4" Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.744773 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-rj8v4"] Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.792731 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-blgnl" Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.823116 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z692\" (UniqueName: \"kubernetes.io/projected/129a7977-fd61-4742-94da-f07dcd889975-kube-api-access-8z692\") pod \"129a7977-fd61-4742-94da-f07dcd889975\" (UID: \"129a7977-fd61-4742-94da-f07dcd889975\") " Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.823293 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/129a7977-fd61-4742-94da-f07dcd889975-catalog-content\") pod \"129a7977-fd61-4742-94da-f07dcd889975\" (UID: \"129a7977-fd61-4742-94da-f07dcd889975\") " Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.823357 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/129a7977-fd61-4742-94da-f07dcd889975-utilities\") pod \"129a7977-fd61-4742-94da-f07dcd889975\" (UID: \"129a7977-fd61-4742-94da-f07dcd889975\") " Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.824530 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/129a7977-fd61-4742-94da-f07dcd889975-utilities" (OuterVolumeSpecName: "utilities") pod "129a7977-fd61-4742-94da-f07dcd889975" (UID: "129a7977-fd61-4742-94da-f07dcd889975"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.831170 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/129a7977-fd61-4742-94da-f07dcd889975-kube-api-access-8z692" (OuterVolumeSpecName: "kube-api-access-8z692") pod "129a7977-fd61-4742-94da-f07dcd889975" (UID: "129a7977-fd61-4742-94da-f07dcd889975"). InnerVolumeSpecName "kube-api-access-8z692". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.894820 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/129a7977-fd61-4742-94da-f07dcd889975-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "129a7977-fd61-4742-94da-f07dcd889975" (UID: "129a7977-fd61-4742-94da-f07dcd889975"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.925581 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z692\" (UniqueName: \"kubernetes.io/projected/129a7977-fd61-4742-94da-f07dcd889975-kube-api-access-8z692\") on node \"crc\" DevicePath \"\"" Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.925631 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/129a7977-fd61-4742-94da-f07dcd889975-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.925646 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/129a7977-fd61-4742-94da-f07dcd889975-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.942726 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-rj8v4" event={"ID":"3ea261e4-31a5-47f1-b7da-585da56b41fd","Type":"ContainerStarted","Data":"151333e65b36b6a12c5b665a2385edff5beb5e239603dbf24e1de973de8464a5"} Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.943558 4725 generic.go:334] "Generic (PLEG): container finished" podID="129a7977-fd61-4742-94da-f07dcd889975" containerID="c25a037cb0d554213739100d3c4e9019850f4f06456ba80e8c3b84b1f293fb64" exitCode=0 Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.943614 4725 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-blgnl" event={"ID":"129a7977-fd61-4742-94da-f07dcd889975","Type":"ContainerDied","Data":"c25a037cb0d554213739100d3c4e9019850f4f06456ba80e8c3b84b1f293fb64"} Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.943634 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-blgnl" Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.943659 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-blgnl" event={"ID":"129a7977-fd61-4742-94da-f07dcd889975","Type":"ContainerDied","Data":"e28e988c48cd24ed974de439a1c86425c5586d960673f8d53fc5d5bf8c75d826"} Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.943711 4725 scope.go:117] "RemoveContainer" containerID="c25a037cb0d554213739100d3c4e9019850f4f06456ba80e8c3b84b1f293fb64" Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.974567 4725 scope.go:117] "RemoveContainer" containerID="584746dcd4d6c7d4426e652a0c0933b84532fecd85a7c82333762971cb76bb49" Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.984462 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-blgnl"] Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.992060 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-blgnl"] Jan 20 12:04:01 crc kubenswrapper[4725]: I0120 12:04:01.006322 4725 scope.go:117] "RemoveContainer" containerID="7d66c732db14bdc6b0e08d4bf49960c1d7fc1c029d20ae903c19d2a030d03c85" Jan 20 12:04:01 crc kubenswrapper[4725]: I0120 12:04:01.027222 4725 scope.go:117] "RemoveContainer" containerID="c25a037cb0d554213739100d3c4e9019850f4f06456ba80e8c3b84b1f293fb64" Jan 20 12:04:01 crc kubenswrapper[4725]: E0120 12:04:01.029230 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c25a037cb0d554213739100d3c4e9019850f4f06456ba80e8c3b84b1f293fb64\": container with ID starting with c25a037cb0d554213739100d3c4e9019850f4f06456ba80e8c3b84b1f293fb64 not found: ID does not exist" containerID="c25a037cb0d554213739100d3c4e9019850f4f06456ba80e8c3b84b1f293fb64" Jan 20 12:04:01 crc kubenswrapper[4725]: I0120 12:04:01.029299 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c25a037cb0d554213739100d3c4e9019850f4f06456ba80e8c3b84b1f293fb64"} err="failed to get container status \"c25a037cb0d554213739100d3c4e9019850f4f06456ba80e8c3b84b1f293fb64\": rpc error: code = NotFound desc = could not find container \"c25a037cb0d554213739100d3c4e9019850f4f06456ba80e8c3b84b1f293fb64\": container with ID starting with c25a037cb0d554213739100d3c4e9019850f4f06456ba80e8c3b84b1f293fb64 not found: ID does not exist" Jan 20 12:04:01 crc kubenswrapper[4725]: I0120 12:04:01.029341 4725 scope.go:117] "RemoveContainer" containerID="584746dcd4d6c7d4426e652a0c0933b84532fecd85a7c82333762971cb76bb49" Jan 20 12:04:01 crc kubenswrapper[4725]: E0120 12:04:01.029968 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"584746dcd4d6c7d4426e652a0c0933b84532fecd85a7c82333762971cb76bb49\": container with ID starting with 584746dcd4d6c7d4426e652a0c0933b84532fecd85a7c82333762971cb76bb49 not found: ID does not exist" containerID="584746dcd4d6c7d4426e652a0c0933b84532fecd85a7c82333762971cb76bb49" Jan 20 12:04:01 crc kubenswrapper[4725]: I0120 12:04:01.030007 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"584746dcd4d6c7d4426e652a0c0933b84532fecd85a7c82333762971cb76bb49"} err="failed to get container status \"584746dcd4d6c7d4426e652a0c0933b84532fecd85a7c82333762971cb76bb49\": rpc error: code = NotFound desc = could not find container \"584746dcd4d6c7d4426e652a0c0933b84532fecd85a7c82333762971cb76bb49\": container with ID 
starting with 584746dcd4d6c7d4426e652a0c0933b84532fecd85a7c82333762971cb76bb49 not found: ID does not exist" Jan 20 12:04:01 crc kubenswrapper[4725]: I0120 12:04:01.030038 4725 scope.go:117] "RemoveContainer" containerID="7d66c732db14bdc6b0e08d4bf49960c1d7fc1c029d20ae903c19d2a030d03c85" Jan 20 12:04:01 crc kubenswrapper[4725]: E0120 12:04:01.030536 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d66c732db14bdc6b0e08d4bf49960c1d7fc1c029d20ae903c19d2a030d03c85\": container with ID starting with 7d66c732db14bdc6b0e08d4bf49960c1d7fc1c029d20ae903c19d2a030d03c85 not found: ID does not exist" containerID="7d66c732db14bdc6b0e08d4bf49960c1d7fc1c029d20ae903c19d2a030d03c85" Jan 20 12:04:01 crc kubenswrapper[4725]: I0120 12:04:01.030616 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d66c732db14bdc6b0e08d4bf49960c1d7fc1c029d20ae903c19d2a030d03c85"} err="failed to get container status \"7d66c732db14bdc6b0e08d4bf49960c1d7fc1c029d20ae903c19d2a030d03c85\": rpc error: code = NotFound desc = could not find container \"7d66c732db14bdc6b0e08d4bf49960c1d7fc1c029d20ae903c19d2a030d03c85\": container with ID starting with 7d66c732db14bdc6b0e08d4bf49960c1d7fc1c029d20ae903c19d2a030d03c85 not found: ID does not exist" Jan 20 12:04:01 crc kubenswrapper[4725]: I0120 12:04:01.952602 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-rj8v4" event={"ID":"3ea261e4-31a5-47f1-b7da-585da56b41fd","Type":"ContainerStarted","Data":"2905b47917f4547148bf4197eb2e620f04f5ff08e18fe5383e108e483ea07291"} Jan 20 12:04:01 crc kubenswrapper[4725]: I0120 12:04:01.976150 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-rj8v4" podStartSLOduration=2.831511676 podStartE2EDuration="2.976124867s" podCreationTimestamp="2026-01-20 12:03:59 +0000 UTC" 
firstStartedPulling="2026-01-20 12:04:00.750199171 +0000 UTC m=+3568.958521144" lastFinishedPulling="2026-01-20 12:04:00.894812362 +0000 UTC m=+3569.103134335" observedRunningTime="2026-01-20 12:04:01.969903281 +0000 UTC m=+3570.178225254" watchObservedRunningTime="2026-01-20 12:04:01.976124867 +0000 UTC m=+3570.184446840" Jan 20 12:04:02 crc kubenswrapper[4725]: I0120 12:04:02.951670 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="129a7977-fd61-4742-94da-f07dcd889975" path="/var/lib/kubelet/pods/129a7977-fd61-4742-94da-f07dcd889975/volumes" Jan 20 12:04:10 crc kubenswrapper[4725]: I0120 12:04:10.289317 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-rj8v4" Jan 20 12:04:10 crc kubenswrapper[4725]: I0120 12:04:10.290145 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-rj8v4" Jan 20 12:04:10 crc kubenswrapper[4725]: I0120 12:04:10.330764 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-rj8v4" Jan 20 12:04:11 crc kubenswrapper[4725]: I0120 12:04:11.076194 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-rj8v4" Jan 20 12:04:11 crc kubenswrapper[4725]: I0120 12:04:11.532813 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-rj8v4"] Jan 20 12:04:13 crc kubenswrapper[4725]: I0120 12:04:13.042137 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-rj8v4" podUID="3ea261e4-31a5-47f1-b7da-585da56b41fd" containerName="registry-server" containerID="cri-o://2905b47917f4547148bf4197eb2e620f04f5ff08e18fe5383e108e483ea07291" gracePeriod=2 Jan 20 12:04:13 crc kubenswrapper[4725]: I0120 12:04:13.447435 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-rj8v4" Jan 20 12:04:13 crc kubenswrapper[4725]: I0120 12:04:13.579747 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ns6mt\" (UniqueName: \"kubernetes.io/projected/3ea261e4-31a5-47f1-b7da-585da56b41fd-kube-api-access-ns6mt\") pod \"3ea261e4-31a5-47f1-b7da-585da56b41fd\" (UID: \"3ea261e4-31a5-47f1-b7da-585da56b41fd\") " Jan 20 12:04:13 crc kubenswrapper[4725]: I0120 12:04:13.588949 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ea261e4-31a5-47f1-b7da-585da56b41fd-kube-api-access-ns6mt" (OuterVolumeSpecName: "kube-api-access-ns6mt") pod "3ea261e4-31a5-47f1-b7da-585da56b41fd" (UID: "3ea261e4-31a5-47f1-b7da-585da56b41fd"). InnerVolumeSpecName "kube-api-access-ns6mt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 12:04:13 crc kubenswrapper[4725]: I0120 12:04:13.681600 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ns6mt\" (UniqueName: \"kubernetes.io/projected/3ea261e4-31a5-47f1-b7da-585da56b41fd-kube-api-access-ns6mt\") on node \"crc\" DevicePath \"\"" Jan 20 12:04:14 crc kubenswrapper[4725]: I0120 12:04:14.053787 4725 generic.go:334] "Generic (PLEG): container finished" podID="3ea261e4-31a5-47f1-b7da-585da56b41fd" containerID="2905b47917f4547148bf4197eb2e620f04f5ff08e18fe5383e108e483ea07291" exitCode=0 Jan 20 12:04:14 crc kubenswrapper[4725]: I0120 12:04:14.053906 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-rj8v4" event={"ID":"3ea261e4-31a5-47f1-b7da-585da56b41fd","Type":"ContainerDied","Data":"2905b47917f4547148bf4197eb2e620f04f5ff08e18fe5383e108e483ea07291"} Jan 20 12:04:14 crc kubenswrapper[4725]: I0120 12:04:14.053977 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-rj8v4" 
event={"ID":"3ea261e4-31a5-47f1-b7da-585da56b41fd","Type":"ContainerDied","Data":"151333e65b36b6a12c5b665a2385edff5beb5e239603dbf24e1de973de8464a5"} Jan 20 12:04:14 crc kubenswrapper[4725]: I0120 12:04:14.054008 4725 scope.go:117] "RemoveContainer" containerID="2905b47917f4547148bf4197eb2e620f04f5ff08e18fe5383e108e483ea07291" Jan 20 12:04:14 crc kubenswrapper[4725]: I0120 12:04:14.054308 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-rj8v4" Jan 20 12:04:14 crc kubenswrapper[4725]: I0120 12:04:14.079793 4725 scope.go:117] "RemoveContainer" containerID="2905b47917f4547148bf4197eb2e620f04f5ff08e18fe5383e108e483ea07291" Jan 20 12:04:14 crc kubenswrapper[4725]: E0120 12:04:14.080586 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2905b47917f4547148bf4197eb2e620f04f5ff08e18fe5383e108e483ea07291\": container with ID starting with 2905b47917f4547148bf4197eb2e620f04f5ff08e18fe5383e108e483ea07291 not found: ID does not exist" containerID="2905b47917f4547148bf4197eb2e620f04f5ff08e18fe5383e108e483ea07291" Jan 20 12:04:14 crc kubenswrapper[4725]: I0120 12:04:14.080662 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2905b47917f4547148bf4197eb2e620f04f5ff08e18fe5383e108e483ea07291"} err="failed to get container status \"2905b47917f4547148bf4197eb2e620f04f5ff08e18fe5383e108e483ea07291\": rpc error: code = NotFound desc = could not find container \"2905b47917f4547148bf4197eb2e620f04f5ff08e18fe5383e108e483ea07291\": container with ID starting with 2905b47917f4547148bf4197eb2e620f04f5ff08e18fe5383e108e483ea07291 not found: ID does not exist" Jan 20 12:04:14 crc kubenswrapper[4725]: I0120 12:04:14.103277 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-rj8v4"] Jan 20 12:04:14 crc kubenswrapper[4725]: I0120 12:04:14.109626 4725 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-rj8v4"] Jan 20 12:04:14 crc kubenswrapper[4725]: I0120 12:04:14.942238 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ea261e4-31a5-47f1-b7da-585da56b41fd" path="/var/lib/kubelet/pods/3ea261e4-31a5-47f1-b7da-585da56b41fd/volumes" Jan 20 12:04:26 crc kubenswrapper[4725]: I0120 12:04:26.727745 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 12:04:26 crc kubenswrapper[4725]: I0120 12:04:26.728693 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 12:04:56 crc kubenswrapper[4725]: I0120 12:04:56.728488 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 12:04:56 crc kubenswrapper[4725]: I0120 12:04:56.729420 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 12:04:56 crc kubenswrapper[4725]: I0120 12:04:56.729496 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 12:04:56 crc kubenswrapper[4725]: I0120 12:04:56.730458 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3414b9d2e27158720e99db24bacd938ad4eb7fdbcbbf9083ad3088cf697a9831"} pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 12:04:56 crc kubenswrapper[4725]: I0120 12:04:56.730535 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" containerID="cri-o://3414b9d2e27158720e99db24bacd938ad4eb7fdbcbbf9083ad3088cf697a9831" gracePeriod=600 Jan 20 12:04:57 crc kubenswrapper[4725]: I0120 12:04:57.487435 4725 generic.go:334] "Generic (PLEG): container finished" podID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerID="3414b9d2e27158720e99db24bacd938ad4eb7fdbcbbf9083ad3088cf697a9831" exitCode=0 Jan 20 12:04:57 crc kubenswrapper[4725]: I0120 12:04:57.487531 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerDied","Data":"3414b9d2e27158720e99db24bacd938ad4eb7fdbcbbf9083ad3088cf697a9831"} Jan 20 12:04:57 crc kubenswrapper[4725]: I0120 12:04:57.488253 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b"} Jan 20 12:04:57 crc kubenswrapper[4725]: I0120 12:04:57.488316 4725 scope.go:117] "RemoveContainer" 
containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" Jan 20 12:07:26 crc kubenswrapper[4725]: I0120 12:07:26.728384 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 12:07:26 crc kubenswrapper[4725]: I0120 12:07:26.731953 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 12:07:56 crc kubenswrapper[4725]: I0120 12:07:56.727858 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 12:07:56 crc kubenswrapper[4725]: I0120 12:07:56.728529 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 12:08:26 crc kubenswrapper[4725]: I0120 12:08:26.728753 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 12:08:26 crc kubenswrapper[4725]: I0120 12:08:26.729549 4725 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 12:08:26 crc kubenswrapper[4725]: I0120 12:08:26.729665 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 12:08:26 crc kubenswrapper[4725]: I0120 12:08:26.731182 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b"} pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 12:08:26 crc kubenswrapper[4725]: I0120 12:08:26.731283 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" containerID="cri-o://1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" gracePeriod=600 Jan 20 12:08:26 crc kubenswrapper[4725]: E0120 12:08:26.858721 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:08:27 crc kubenswrapper[4725]: I0120 12:08:27.571762 4725 generic.go:334] "Generic (PLEG): container finished" 
podID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" exitCode=0
Jan 20 12:08:27 crc kubenswrapper[4725]: I0120 12:08:27.571817 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerDied","Data":"1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b"}
Jan 20 12:08:27 crc kubenswrapper[4725]: I0120 12:08:27.572285 4725 scope.go:117] "RemoveContainer" containerID="3414b9d2e27158720e99db24bacd938ad4eb7fdbcbbf9083ad3088cf697a9831"
Jan 20 12:08:27 crc kubenswrapper[4725]: I0120 12:08:27.573214 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b"
Jan 20 12:08:27 crc kubenswrapper[4725]: E0120 12:08:27.575931 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:08:39 crc kubenswrapper[4725]: I0120 12:08:39.932384 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b"
Jan 20 12:08:39 crc kubenswrapper[4725]: E0120 12:08:39.933557 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:08:50 crc kubenswrapper[4725]: I0120 12:08:50.933580 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b"
Jan 20 12:08:50 crc kubenswrapper[4725]: E0120 12:08:50.934755 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:09:05 crc kubenswrapper[4725]: I0120 12:09:05.425182 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-2ttwk"]
Jan 20 12:09:05 crc kubenswrapper[4725]: E0120 12:09:05.426680 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="129a7977-fd61-4742-94da-f07dcd889975" containerName="extract-utilities"
Jan 20 12:09:05 crc kubenswrapper[4725]: I0120 12:09:05.426708 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="129a7977-fd61-4742-94da-f07dcd889975" containerName="extract-utilities"
Jan 20 12:09:05 crc kubenswrapper[4725]: E0120 12:09:05.426755 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="129a7977-fd61-4742-94da-f07dcd889975" containerName="registry-server"
Jan 20 12:09:05 crc kubenswrapper[4725]: I0120 12:09:05.426767 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="129a7977-fd61-4742-94da-f07dcd889975" containerName="registry-server"
Jan 20 12:09:05 crc kubenswrapper[4725]: E0120 12:09:05.426789 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea261e4-31a5-47f1-b7da-585da56b41fd" containerName="registry-server"
Jan 20 12:09:05 crc kubenswrapper[4725]: I0120 12:09:05.426803 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea261e4-31a5-47f1-b7da-585da56b41fd" containerName="registry-server"
Jan 20 12:09:05 crc kubenswrapper[4725]: E0120 12:09:05.426827 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="129a7977-fd61-4742-94da-f07dcd889975" containerName="extract-content"
Jan 20 12:09:05 crc kubenswrapper[4725]: I0120 12:09:05.426838 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="129a7977-fd61-4742-94da-f07dcd889975" containerName="extract-content"
Jan 20 12:09:05 crc kubenswrapper[4725]: I0120 12:09:05.427121 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ea261e4-31a5-47f1-b7da-585da56b41fd" containerName="registry-server"
Jan 20 12:09:05 crc kubenswrapper[4725]: I0120 12:09:05.427154 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="129a7977-fd61-4742-94da-f07dcd889975" containerName="registry-server"
Jan 20 12:09:05 crc kubenswrapper[4725]: I0120 12:09:05.428329 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-2ttwk"
Jan 20 12:09:05 crc kubenswrapper[4725]: I0120 12:09:05.437322 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-2ttwk"]
Jan 20 12:09:05 crc kubenswrapper[4725]: I0120 12:09:05.546947 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x98n8\" (UniqueName: \"kubernetes.io/projected/c04dbdce-d40b-4ab6-a770-29307869c23c-kube-api-access-x98n8\") pod \"infrawatch-operators-2ttwk\" (UID: \"c04dbdce-d40b-4ab6-a770-29307869c23c\") " pod="service-telemetry/infrawatch-operators-2ttwk"
Jan 20 12:09:05 crc kubenswrapper[4725]: I0120 12:09:05.650570 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x98n8\" (UniqueName: \"kubernetes.io/projected/c04dbdce-d40b-4ab6-a770-29307869c23c-kube-api-access-x98n8\") pod \"infrawatch-operators-2ttwk\" (UID: \"c04dbdce-d40b-4ab6-a770-29307869c23c\") " pod="service-telemetry/infrawatch-operators-2ttwk"
Jan 20 12:09:05 crc kubenswrapper[4725]: I0120 12:09:05.673415 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x98n8\" (UniqueName: \"kubernetes.io/projected/c04dbdce-d40b-4ab6-a770-29307869c23c-kube-api-access-x98n8\") pod \"infrawatch-operators-2ttwk\" (UID: \"c04dbdce-d40b-4ab6-a770-29307869c23c\") " pod="service-telemetry/infrawatch-operators-2ttwk"
Jan 20 12:09:05 crc kubenswrapper[4725]: I0120 12:09:05.751355 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-2ttwk"
Jan 20 12:09:05 crc kubenswrapper[4725]: I0120 12:09:05.932360 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b"
Jan 20 12:09:05 crc kubenswrapper[4725]: E0120 12:09:05.933175 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:09:06 crc kubenswrapper[4725]: I0120 12:09:06.067653 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-2ttwk"]
Jan 20 12:09:06 crc kubenswrapper[4725]: I0120 12:09:06.083206 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 20 12:09:06 crc kubenswrapper[4725]: I0120 12:09:06.983292 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-2ttwk" event={"ID":"c04dbdce-d40b-4ab6-a770-29307869c23c","Type":"ContainerStarted","Data":"df59315b38b0785979f369a3bcf86b797414bb3e9be402d404c652c5eef263ee"}
Jan 20 12:09:06 crc kubenswrapper[4725]: I0120 12:09:06.983369 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-2ttwk" event={"ID":"c04dbdce-d40b-4ab6-a770-29307869c23c","Type":"ContainerStarted","Data":"fd4ba608abad54884b78044eb3da74fc1b2260422eff0a852105597c3a216ab8"}
Jan 20 12:09:07 crc kubenswrapper[4725]: I0120 12:09:07.009795 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-2ttwk" podStartSLOduration=1.867875263 podStartE2EDuration="2.009733537s" podCreationTimestamp="2026-01-20 12:09:05 +0000 UTC" firstStartedPulling="2026-01-20 12:09:06.082865663 +0000 UTC m=+3874.291187636" lastFinishedPulling="2026-01-20 12:09:06.224723937 +0000 UTC m=+3874.433045910" observedRunningTime="2026-01-20 12:09:07.00348409 +0000 UTC m=+3875.211806073" watchObservedRunningTime="2026-01-20 12:09:07.009733537 +0000 UTC m=+3875.218055520"
Jan 20 12:09:15 crc kubenswrapper[4725]: I0120 12:09:15.751942 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-2ttwk"
Jan 20 12:09:15 crc kubenswrapper[4725]: I0120 12:09:15.754567 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-2ttwk"
Jan 20 12:09:15 crc kubenswrapper[4725]: I0120 12:09:15.797336 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-2ttwk"
Jan 20 12:09:16 crc kubenswrapper[4725]: I0120 12:09:16.118558 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-2ttwk"
Jan 20 12:09:16 crc kubenswrapper[4725]: I0120 12:09:16.175717 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-2ttwk"]
Jan 20 12:09:18 crc kubenswrapper[4725]: I0120 12:09:18.094692 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-2ttwk" podUID="c04dbdce-d40b-4ab6-a770-29307869c23c" containerName="registry-server" containerID="cri-o://df59315b38b0785979f369a3bcf86b797414bb3e9be402d404c652c5eef263ee" gracePeriod=2
Jan 20 12:09:18 crc kubenswrapper[4725]: I0120 12:09:18.782245 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-2ttwk"
Jan 20 12:09:18 crc kubenswrapper[4725]: I0120 12:09:18.964432 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x98n8\" (UniqueName: \"kubernetes.io/projected/c04dbdce-d40b-4ab6-a770-29307869c23c-kube-api-access-x98n8\") pod \"c04dbdce-d40b-4ab6-a770-29307869c23c\" (UID: \"c04dbdce-d40b-4ab6-a770-29307869c23c\") "
Jan 20 12:09:18 crc kubenswrapper[4725]: I0120 12:09:18.976208 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c04dbdce-d40b-4ab6-a770-29307869c23c-kube-api-access-x98n8" (OuterVolumeSpecName: "kube-api-access-x98n8") pod "c04dbdce-d40b-4ab6-a770-29307869c23c" (UID: "c04dbdce-d40b-4ab6-a770-29307869c23c"). InnerVolumeSpecName "kube-api-access-x98n8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 12:09:19 crc kubenswrapper[4725]: I0120 12:09:19.066234 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x98n8\" (UniqueName: \"kubernetes.io/projected/c04dbdce-d40b-4ab6-a770-29307869c23c-kube-api-access-x98n8\") on node \"crc\" DevicePath \"\""
Jan 20 12:09:19 crc kubenswrapper[4725]: I0120 12:09:19.106269 4725 generic.go:334] "Generic (PLEG): container finished" podID="c04dbdce-d40b-4ab6-a770-29307869c23c" containerID="df59315b38b0785979f369a3bcf86b797414bb3e9be402d404c652c5eef263ee" exitCode=0
Jan 20 12:09:19 crc kubenswrapper[4725]: I0120 12:09:19.106339 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-2ttwk" event={"ID":"c04dbdce-d40b-4ab6-a770-29307869c23c","Type":"ContainerDied","Data":"df59315b38b0785979f369a3bcf86b797414bb3e9be402d404c652c5eef263ee"}
Jan 20 12:09:19 crc kubenswrapper[4725]: I0120 12:09:19.106349 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-2ttwk"
Jan 20 12:09:19 crc kubenswrapper[4725]: I0120 12:09:19.106388 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-2ttwk" event={"ID":"c04dbdce-d40b-4ab6-a770-29307869c23c","Type":"ContainerDied","Data":"fd4ba608abad54884b78044eb3da74fc1b2260422eff0a852105597c3a216ab8"}
Jan 20 12:09:19 crc kubenswrapper[4725]: I0120 12:09:19.106414 4725 scope.go:117] "RemoveContainer" containerID="df59315b38b0785979f369a3bcf86b797414bb3e9be402d404c652c5eef263ee"
Jan 20 12:09:19 crc kubenswrapper[4725]: I0120 12:09:19.144453 4725 scope.go:117] "RemoveContainer" containerID="df59315b38b0785979f369a3bcf86b797414bb3e9be402d404c652c5eef263ee"
Jan 20 12:09:19 crc kubenswrapper[4725]: E0120 12:09:19.145093 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df59315b38b0785979f369a3bcf86b797414bb3e9be402d404c652c5eef263ee\": container with ID starting with df59315b38b0785979f369a3bcf86b797414bb3e9be402d404c652c5eef263ee not found: ID does not exist" containerID="df59315b38b0785979f369a3bcf86b797414bb3e9be402d404c652c5eef263ee"
Jan 20 12:09:19 crc kubenswrapper[4725]: I0120 12:09:19.145136 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df59315b38b0785979f369a3bcf86b797414bb3e9be402d404c652c5eef263ee"} err="failed to get container status \"df59315b38b0785979f369a3bcf86b797414bb3e9be402d404c652c5eef263ee\": rpc error: code = NotFound desc = could not find container \"df59315b38b0785979f369a3bcf86b797414bb3e9be402d404c652c5eef263ee\": container with ID starting with df59315b38b0785979f369a3bcf86b797414bb3e9be402d404c652c5eef263ee not found: ID does not exist"
Jan 20 12:09:19 crc kubenswrapper[4725]: I0120 12:09:19.158924 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-2ttwk"]
Jan 20 12:09:19 crc kubenswrapper[4725]: I0120 12:09:19.169272 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-2ttwk"]
Jan 20 12:09:19 crc kubenswrapper[4725]: I0120 12:09:19.932750 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b"
Jan 20 12:09:19 crc kubenswrapper[4725]: E0120 12:09:19.933151 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:09:20 crc kubenswrapper[4725]: I0120 12:09:20.954938 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c04dbdce-d40b-4ab6-a770-29307869c23c" path="/var/lib/kubelet/pods/c04dbdce-d40b-4ab6-a770-29307869c23c/volumes"
Jan 20 12:09:32 crc kubenswrapper[4725]: I0120 12:09:32.938553 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b"
Jan 20 12:09:32 crc kubenswrapper[4725]: E0120 12:09:32.941769 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:09:44 crc kubenswrapper[4725]: I0120 12:09:44.932723 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b"
Jan 20 12:09:44 crc kubenswrapper[4725]: E0120 12:09:44.933943 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:09:58 crc kubenswrapper[4725]: I0120 12:09:58.932569 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b"
Jan 20 12:09:58 crc kubenswrapper[4725]: E0120 12:09:58.933649 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:10:04 crc kubenswrapper[4725]: I0120 12:10:04.383687 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jtnxl"]
Jan 20 12:10:04 crc kubenswrapper[4725]: E0120 12:10:04.385179 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c04dbdce-d40b-4ab6-a770-29307869c23c" containerName="registry-server"
Jan 20 12:10:04 crc kubenswrapper[4725]: I0120 12:10:04.385206 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04dbdce-d40b-4ab6-a770-29307869c23c" containerName="registry-server"
Jan 20 12:10:04 crc kubenswrapper[4725]: I0120 12:10:04.385477 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="c04dbdce-d40b-4ab6-a770-29307869c23c" containerName="registry-server"
Jan 20 12:10:04 crc kubenswrapper[4725]: I0120 12:10:04.387438 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jtnxl"
Jan 20 12:10:04 crc kubenswrapper[4725]: I0120 12:10:04.629464 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jtnxl"]
Jan 20 12:10:04 crc kubenswrapper[4725]: I0120 12:10:04.731540 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149c1b55-088a-4bd8-beaf-ca554aefa16c-utilities\") pod \"redhat-operators-jtnxl\" (UID: \"149c1b55-088a-4bd8-beaf-ca554aefa16c\") " pod="openshift-marketplace/redhat-operators-jtnxl"
Jan 20 12:10:04 crc kubenswrapper[4725]: I0120 12:10:04.731603 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149c1b55-088a-4bd8-beaf-ca554aefa16c-catalog-content\") pod \"redhat-operators-jtnxl\" (UID: \"149c1b55-088a-4bd8-beaf-ca554aefa16c\") " pod="openshift-marketplace/redhat-operators-jtnxl"
Jan 20 12:10:04 crc kubenswrapper[4725]: I0120 12:10:04.731644 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrqc7\" (UniqueName: \"kubernetes.io/projected/149c1b55-088a-4bd8-beaf-ca554aefa16c-kube-api-access-qrqc7\") pod \"redhat-operators-jtnxl\" (UID: \"149c1b55-088a-4bd8-beaf-ca554aefa16c\") " pod="openshift-marketplace/redhat-operators-jtnxl"
Jan 20 12:10:04 crc kubenswrapper[4725]: I0120 12:10:04.835283 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149c1b55-088a-4bd8-beaf-ca554aefa16c-utilities\") pod \"redhat-operators-jtnxl\" (UID: \"149c1b55-088a-4bd8-beaf-ca554aefa16c\") " pod="openshift-marketplace/redhat-operators-jtnxl"
Jan 20 12:10:04 crc kubenswrapper[4725]: I0120 12:10:04.835748 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149c1b55-088a-4bd8-beaf-ca554aefa16c-catalog-content\") pod \"redhat-operators-jtnxl\" (UID: \"149c1b55-088a-4bd8-beaf-ca554aefa16c\") " pod="openshift-marketplace/redhat-operators-jtnxl"
Jan 20 12:10:04 crc kubenswrapper[4725]: I0120 12:10:04.835796 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrqc7\" (UniqueName: \"kubernetes.io/projected/149c1b55-088a-4bd8-beaf-ca554aefa16c-kube-api-access-qrqc7\") pod \"redhat-operators-jtnxl\" (UID: \"149c1b55-088a-4bd8-beaf-ca554aefa16c\") " pod="openshift-marketplace/redhat-operators-jtnxl"
Jan 20 12:10:04 crc kubenswrapper[4725]: I0120 12:10:04.836285 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149c1b55-088a-4bd8-beaf-ca554aefa16c-utilities\") pod \"redhat-operators-jtnxl\" (UID: \"149c1b55-088a-4bd8-beaf-ca554aefa16c\") " pod="openshift-marketplace/redhat-operators-jtnxl"
Jan 20 12:10:04 crc kubenswrapper[4725]: I0120 12:10:04.836681 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149c1b55-088a-4bd8-beaf-ca554aefa16c-catalog-content\") pod \"redhat-operators-jtnxl\" (UID: \"149c1b55-088a-4bd8-beaf-ca554aefa16c\") " pod="openshift-marketplace/redhat-operators-jtnxl"
Jan 20 12:10:04 crc kubenswrapper[4725]: I0120 12:10:04.863131 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrqc7\" (UniqueName: \"kubernetes.io/projected/149c1b55-088a-4bd8-beaf-ca554aefa16c-kube-api-access-qrqc7\") pod \"redhat-operators-jtnxl\" (UID: \"149c1b55-088a-4bd8-beaf-ca554aefa16c\") " pod="openshift-marketplace/redhat-operators-jtnxl"
Jan 20 12:10:04 crc kubenswrapper[4725]: I0120 12:10:04.953925 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jtnxl"
Jan 20 12:10:05 crc kubenswrapper[4725]: I0120 12:10:05.222891 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jtnxl"]
Jan 20 12:10:05 crc kubenswrapper[4725]: I0120 12:10:05.654192 4725 generic.go:334] "Generic (PLEG): container finished" podID="149c1b55-088a-4bd8-beaf-ca554aefa16c" containerID="029968eb55c812507bab83444b4f5735976f6601188235dc475e69bc38de138d" exitCode=0
Jan 20 12:10:05 crc kubenswrapper[4725]: I0120 12:10:05.654248 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtnxl" event={"ID":"149c1b55-088a-4bd8-beaf-ca554aefa16c","Type":"ContainerDied","Data":"029968eb55c812507bab83444b4f5735976f6601188235dc475e69bc38de138d"}
Jan 20 12:10:05 crc kubenswrapper[4725]: I0120 12:10:05.654278 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtnxl" event={"ID":"149c1b55-088a-4bd8-beaf-ca554aefa16c","Type":"ContainerStarted","Data":"fff4b06d60c6f765cf9c65a26e4e50c9347c82f2284687cae5f1eaf97ae21b3a"}
Jan 20 12:10:06 crc kubenswrapper[4725]: I0120 12:10:06.668732 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtnxl" event={"ID":"149c1b55-088a-4bd8-beaf-ca554aefa16c","Type":"ContainerStarted","Data":"6780ca67cee689c089acf7445a9626281fb53914e78dd7fd07b93a6187e7bde4"}
Jan 20 12:10:08 crc kubenswrapper[4725]: I0120 12:10:08.692485 4725 generic.go:334] "Generic (PLEG): container finished" podID="149c1b55-088a-4bd8-beaf-ca554aefa16c" containerID="6780ca67cee689c089acf7445a9626281fb53914e78dd7fd07b93a6187e7bde4" exitCode=0
Jan 20 12:10:08 crc kubenswrapper[4725]: I0120 12:10:08.692573 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtnxl" event={"ID":"149c1b55-088a-4bd8-beaf-ca554aefa16c","Type":"ContainerDied","Data":"6780ca67cee689c089acf7445a9626281fb53914e78dd7fd07b93a6187e7bde4"}
Jan 20 12:10:09 crc kubenswrapper[4725]: I0120 12:10:09.706060 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtnxl" event={"ID":"149c1b55-088a-4bd8-beaf-ca554aefa16c","Type":"ContainerStarted","Data":"735f5d970f8d5a1c45fbe47087952f9fd27bd25b66b262589ddc6d4b423b0605"}
Jan 20 12:10:13 crc kubenswrapper[4725]: I0120 12:10:13.932717 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b"
Jan 20 12:10:13 crc kubenswrapper[4725]: E0120 12:10:13.933660 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:10:14 crc kubenswrapper[4725]: I0120 12:10:14.954490 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jtnxl"
Jan 20 12:10:14 crc kubenswrapper[4725]: I0120 12:10:14.955726 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jtnxl"
Jan 20 12:10:16 crc kubenswrapper[4725]: I0120 12:10:16.028237 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jtnxl" podUID="149c1b55-088a-4bd8-beaf-ca554aefa16c" containerName="registry-server" probeResult="failure" output=<
Jan 20 12:10:16 crc kubenswrapper[4725]: 	timeout: failed to connect service ":50051" within 1s
Jan 20 12:10:16 crc kubenswrapper[4725]:  >
Jan 20 12:10:25 crc kubenswrapper[4725]: I0120 12:10:25.006135 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jtnxl"
Jan 20 12:10:25 crc kubenswrapper[4725]: I0120 12:10:25.044159 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jtnxl" podStartSLOduration=17.396799168 podStartE2EDuration="21.044129307s" podCreationTimestamp="2026-01-20 12:10:04 +0000 UTC" firstStartedPulling="2026-01-20 12:10:05.656212972 +0000 UTC m=+3933.864534945" lastFinishedPulling="2026-01-20 12:10:09.303543111 +0000 UTC m=+3937.511865084" observedRunningTime="2026-01-20 12:10:09.738424748 +0000 UTC m=+3937.946746721" watchObservedRunningTime="2026-01-20 12:10:25.044129307 +0000 UTC m=+3953.252451290"
Jan 20 12:10:25 crc kubenswrapper[4725]: I0120 12:10:25.066956 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jtnxl"
Jan 20 12:10:25 crc kubenswrapper[4725]: I0120 12:10:25.262581 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jtnxl"]
Jan 20 12:10:26 crc kubenswrapper[4725]: I0120 12:10:26.362706 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jtnxl" podUID="149c1b55-088a-4bd8-beaf-ca554aefa16c" containerName="registry-server" containerID="cri-o://735f5d970f8d5a1c45fbe47087952f9fd27bd25b66b262589ddc6d4b423b0605" gracePeriod=2
Jan 20 12:10:27 crc kubenswrapper[4725]: I0120 12:10:27.375993 4725 generic.go:334] "Generic (PLEG): container finished" podID="149c1b55-088a-4bd8-beaf-ca554aefa16c" containerID="735f5d970f8d5a1c45fbe47087952f9fd27bd25b66b262589ddc6d4b423b0605" exitCode=0
Jan 20 12:10:27 crc kubenswrapper[4725]: I0120 12:10:27.376193 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtnxl" event={"ID":"149c1b55-088a-4bd8-beaf-ca554aefa16c","Type":"ContainerDied","Data":"735f5d970f8d5a1c45fbe47087952f9fd27bd25b66b262589ddc6d4b423b0605"}
Jan 20 12:10:27 crc kubenswrapper[4725]: I0120 12:10:27.965717 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jtnxl"
Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.150026 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149c1b55-088a-4bd8-beaf-ca554aefa16c-catalog-content\") pod \"149c1b55-088a-4bd8-beaf-ca554aefa16c\" (UID: \"149c1b55-088a-4bd8-beaf-ca554aefa16c\") "
Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.150128 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrqc7\" (UniqueName: \"kubernetes.io/projected/149c1b55-088a-4bd8-beaf-ca554aefa16c-kube-api-access-qrqc7\") pod \"149c1b55-088a-4bd8-beaf-ca554aefa16c\" (UID: \"149c1b55-088a-4bd8-beaf-ca554aefa16c\") "
Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.150372 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149c1b55-088a-4bd8-beaf-ca554aefa16c-utilities\") pod \"149c1b55-088a-4bd8-beaf-ca554aefa16c\" (UID: \"149c1b55-088a-4bd8-beaf-ca554aefa16c\") "
Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.151799 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149c1b55-088a-4bd8-beaf-ca554aefa16c-utilities" (OuterVolumeSpecName: "utilities") pod "149c1b55-088a-4bd8-beaf-ca554aefa16c" (UID: "149c1b55-088a-4bd8-beaf-ca554aefa16c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.174239 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149c1b55-088a-4bd8-beaf-ca554aefa16c-kube-api-access-qrqc7" (OuterVolumeSpecName: "kube-api-access-qrqc7") pod "149c1b55-088a-4bd8-beaf-ca554aefa16c" (UID: "149c1b55-088a-4bd8-beaf-ca554aefa16c"). InnerVolumeSpecName "kube-api-access-qrqc7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.252316 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149c1b55-088a-4bd8-beaf-ca554aefa16c-utilities\") on node \"crc\" DevicePath \"\""
Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.252368 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrqc7\" (UniqueName: \"kubernetes.io/projected/149c1b55-088a-4bd8-beaf-ca554aefa16c-kube-api-access-qrqc7\") on node \"crc\" DevicePath \"\""
Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.280699 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149c1b55-088a-4bd8-beaf-ca554aefa16c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149c1b55-088a-4bd8-beaf-ca554aefa16c" (UID: "149c1b55-088a-4bd8-beaf-ca554aefa16c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.354222 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149c1b55-088a-4bd8-beaf-ca554aefa16c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.385904 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtnxl" event={"ID":"149c1b55-088a-4bd8-beaf-ca554aefa16c","Type":"ContainerDied","Data":"fff4b06d60c6f765cf9c65a26e4e50c9347c82f2284687cae5f1eaf97ae21b3a"}
Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.385976 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jtnxl"
Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.385977 4725 scope.go:117] "RemoveContainer" containerID="735f5d970f8d5a1c45fbe47087952f9fd27bd25b66b262589ddc6d4b423b0605"
Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.406370 4725 scope.go:117] "RemoveContainer" containerID="6780ca67cee689c089acf7445a9626281fb53914e78dd7fd07b93a6187e7bde4"
Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.439301 4725 scope.go:117] "RemoveContainer" containerID="029968eb55c812507bab83444b4f5735976f6601188235dc475e69bc38de138d"
Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.444944 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jtnxl"]
Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.451362 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jtnxl"]
Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.932602 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b"
Jan 20 12:10:28 crc kubenswrapper[4725]: E0120 12:10:28.933046 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.943285 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149c1b55-088a-4bd8-beaf-ca554aefa16c" path="/var/lib/kubelet/pods/149c1b55-088a-4bd8-beaf-ca554aefa16c/volumes"
Jan 20 12:10:39 crc kubenswrapper[4725]: I0120 12:10:39.933321 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b"
Jan 20 12:10:39 crc kubenswrapper[4725]: E0120 12:10:39.934497 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:10:51 crc kubenswrapper[4725]: I0120 12:10:51.931907 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b"
Jan 20 12:10:51 crc kubenswrapper[4725]: E0120 12:10:51.932898 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:11:02 crc kubenswrapper[4725]: I0120 12:11:02.936577 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b"
Jan 20 12:11:02 crc kubenswrapper[4725]: E0120 12:11:02.937519 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:11:14 crc kubenswrapper[4725]: I0120 12:11:14.932214 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b"
Jan 20 12:11:14 crc kubenswrapper[4725]: E0120 12:11:14.933645 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:11:28 crc kubenswrapper[4725]: I0120 12:11:28.933749 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b"
Jan 20 12:11:28 crc kubenswrapper[4725]: E0120 12:11:28.934699 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:11:42 crc kubenswrapper[4725]: I0120 12:11:42.941540 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b"
Jan 20 12:11:42 crc kubenswrapper[4725]: E0120 12:11:42.942791 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.265538 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5m7sx"]
Jan 20 12:11:49 crc kubenswrapper[4725]: E0120 12:11:49.266693 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="149c1b55-088a-4bd8-beaf-ca554aefa16c" containerName="extract-utilities"
Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.266711 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="149c1b55-088a-4bd8-beaf-ca554aefa16c" containerName="extract-utilities"
Jan 20 12:11:49 crc kubenswrapper[4725]: E0120 12:11:49.266731 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="149c1b55-088a-4bd8-beaf-ca554aefa16c" containerName="registry-server"
Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.266740 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="149c1b55-088a-4bd8-beaf-ca554aefa16c" containerName="registry-server"
Jan 20 12:11:49 crc kubenswrapper[4725]: E0120 12:11:49.266761 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="149c1b55-088a-4bd8-beaf-ca554aefa16c" containerName="extract-content"
Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.266768 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="149c1b55-088a-4bd8-beaf-ca554aefa16c" containerName="extract-content"
Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.266921 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="149c1b55-088a-4bd8-beaf-ca554aefa16c" containerName="registry-server"
Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.268014 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5m7sx"
Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.284327 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5m7sx"]
Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.338243 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77qvq\" (UniqueName: \"kubernetes.io/projected/5f43a5ae-ed9d-43b3-9729-5c1110c63635-kube-api-access-77qvq\") pod \"certified-operators-5m7sx\" (UID: \"5f43a5ae-ed9d-43b3-9729-5c1110c63635\") " pod="openshift-marketplace/certified-operators-5m7sx"
Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.338522 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f43a5ae-ed9d-43b3-9729-5c1110c63635-utilities\") pod \"certified-operators-5m7sx\" (UID: \"5f43a5ae-ed9d-43b3-9729-5c1110c63635\") " pod="openshift-marketplace/certified-operators-5m7sx"
Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.338698 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f43a5ae-ed9d-43b3-9729-5c1110c63635-catalog-content\") pod \"certified-operators-5m7sx\" (UID: \"5f43a5ae-ed9d-43b3-9729-5c1110c63635\") " pod="openshift-marketplace/certified-operators-5m7sx"
Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.440371 4725 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f43a5ae-ed9d-43b3-9729-5c1110c63635-catalog-content\") pod \"certified-operators-5m7sx\" (UID: \"5f43a5ae-ed9d-43b3-9729-5c1110c63635\") " pod="openshift-marketplace/certified-operators-5m7sx" Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.440993 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f43a5ae-ed9d-43b3-9729-5c1110c63635-catalog-content\") pod \"certified-operators-5m7sx\" (UID: \"5f43a5ae-ed9d-43b3-9729-5c1110c63635\") " pod="openshift-marketplace/certified-operators-5m7sx" Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.441189 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77qvq\" (UniqueName: \"kubernetes.io/projected/5f43a5ae-ed9d-43b3-9729-5c1110c63635-kube-api-access-77qvq\") pod \"certified-operators-5m7sx\" (UID: \"5f43a5ae-ed9d-43b3-9729-5c1110c63635\") " pod="openshift-marketplace/certified-operators-5m7sx" Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.441255 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f43a5ae-ed9d-43b3-9729-5c1110c63635-utilities\") pod \"certified-operators-5m7sx\" (UID: \"5f43a5ae-ed9d-43b3-9729-5c1110c63635\") " pod="openshift-marketplace/certified-operators-5m7sx" Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.441615 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f43a5ae-ed9d-43b3-9729-5c1110c63635-utilities\") pod \"certified-operators-5m7sx\" (UID: \"5f43a5ae-ed9d-43b3-9729-5c1110c63635\") " pod="openshift-marketplace/certified-operators-5m7sx" Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.474866 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-77qvq\" (UniqueName: \"kubernetes.io/projected/5f43a5ae-ed9d-43b3-9729-5c1110c63635-kube-api-access-77qvq\") pod \"certified-operators-5m7sx\" (UID: \"5f43a5ae-ed9d-43b3-9729-5c1110c63635\") " pod="openshift-marketplace/certified-operators-5m7sx" Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.594208 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5m7sx" Jan 20 12:11:50 crc kubenswrapper[4725]: I0120 12:11:50.107935 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5m7sx"] Jan 20 12:11:50 crc kubenswrapper[4725]: I0120 12:11:50.253627 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5m7sx" event={"ID":"5f43a5ae-ed9d-43b3-9729-5c1110c63635","Type":"ContainerStarted","Data":"b48ca1fa046c8e287de985bb17fb7828ccd59181d6510c4af368e691e4a7eb94"} Jan 20 12:11:51 crc kubenswrapper[4725]: I0120 12:11:51.266443 4725 generic.go:334] "Generic (PLEG): container finished" podID="5f43a5ae-ed9d-43b3-9729-5c1110c63635" containerID="12fc0d4b7a6b440d05aae65bbaf75415b33cc1b772ffbbdf7c18502d8fa4db78" exitCode=0 Jan 20 12:11:51 crc kubenswrapper[4725]: I0120 12:11:51.266536 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5m7sx" event={"ID":"5f43a5ae-ed9d-43b3-9729-5c1110c63635","Type":"ContainerDied","Data":"12fc0d4b7a6b440d05aae65bbaf75415b33cc1b772ffbbdf7c18502d8fa4db78"} Jan 20 12:11:53 crc kubenswrapper[4725]: I0120 12:11:53.306879 4725 generic.go:334] "Generic (PLEG): container finished" podID="5f43a5ae-ed9d-43b3-9729-5c1110c63635" containerID="b11d2d8a1b0606ecc18cd1499a12a7672ace55137edbf153607ef35e8279f66f" exitCode=0 Jan 20 12:11:53 crc kubenswrapper[4725]: I0120 12:11:53.306962 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5m7sx" 
event={"ID":"5f43a5ae-ed9d-43b3-9729-5c1110c63635","Type":"ContainerDied","Data":"b11d2d8a1b0606ecc18cd1499a12a7672ace55137edbf153607ef35e8279f66f"} Jan 20 12:11:54 crc kubenswrapper[4725]: I0120 12:11:54.932514 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:11:54 crc kubenswrapper[4725]: E0120 12:11:54.933260 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:11:55 crc kubenswrapper[4725]: I0120 12:11:55.330622 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5m7sx" event={"ID":"5f43a5ae-ed9d-43b3-9729-5c1110c63635","Type":"ContainerStarted","Data":"fe55a71fbad73183443a4a97a48dac2d17433bc1a0a0447ea38989d6cb15d0e4"} Jan 20 12:11:55 crc kubenswrapper[4725]: I0120 12:11:55.366421 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5m7sx" podStartSLOduration=3.127371095 podStartE2EDuration="6.36639657s" podCreationTimestamp="2026-01-20 12:11:49 +0000 UTC" firstStartedPulling="2026-01-20 12:11:51.273416221 +0000 UTC m=+4039.481738214" lastFinishedPulling="2026-01-20 12:11:54.512441716 +0000 UTC m=+4042.720763689" observedRunningTime="2026-01-20 12:11:55.361620689 +0000 UTC m=+4043.569942682" watchObservedRunningTime="2026-01-20 12:11:55.36639657 +0000 UTC m=+4043.574718553" Jan 20 12:11:59 crc kubenswrapper[4725]: I0120 12:11:59.594907 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5m7sx" Jan 20 12:11:59 crc 
kubenswrapper[4725]: I0120 12:11:59.595632 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5m7sx" Jan 20 12:11:59 crc kubenswrapper[4725]: I0120 12:11:59.642196 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5m7sx" Jan 20 12:12:00 crc kubenswrapper[4725]: I0120 12:12:00.445249 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5m7sx" Jan 20 12:12:00 crc kubenswrapper[4725]: I0120 12:12:00.513202 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5m7sx"] Jan 20 12:12:02 crc kubenswrapper[4725]: I0120 12:12:02.421686 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5m7sx" podUID="5f43a5ae-ed9d-43b3-9729-5c1110c63635" containerName="registry-server" containerID="cri-o://fe55a71fbad73183443a4a97a48dac2d17433bc1a0a0447ea38989d6cb15d0e4" gracePeriod=2 Jan 20 12:12:03 crc kubenswrapper[4725]: I0120 12:12:03.432821 4725 generic.go:334] "Generic (PLEG): container finished" podID="5f43a5ae-ed9d-43b3-9729-5c1110c63635" containerID="fe55a71fbad73183443a4a97a48dac2d17433bc1a0a0447ea38989d6cb15d0e4" exitCode=0 Jan 20 12:12:03 crc kubenswrapper[4725]: I0120 12:12:03.432900 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5m7sx" event={"ID":"5f43a5ae-ed9d-43b3-9729-5c1110c63635","Type":"ContainerDied","Data":"fe55a71fbad73183443a4a97a48dac2d17433bc1a0a0447ea38989d6cb15d0e4"} Jan 20 12:12:03 crc kubenswrapper[4725]: I0120 12:12:03.433575 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5m7sx" event={"ID":"5f43a5ae-ed9d-43b3-9729-5c1110c63635","Type":"ContainerDied","Data":"b48ca1fa046c8e287de985bb17fb7828ccd59181d6510c4af368e691e4a7eb94"} Jan 20 
12:12:03 crc kubenswrapper[4725]: I0120 12:12:03.433610 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b48ca1fa046c8e287de985bb17fb7828ccd59181d6510c4af368e691e4a7eb94" Jan 20 12:12:03 crc kubenswrapper[4725]: I0120 12:12:03.475466 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5m7sx" Jan 20 12:12:03 crc kubenswrapper[4725]: I0120 12:12:03.539336 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f43a5ae-ed9d-43b3-9729-5c1110c63635-utilities\") pod \"5f43a5ae-ed9d-43b3-9729-5c1110c63635\" (UID: \"5f43a5ae-ed9d-43b3-9729-5c1110c63635\") " Jan 20 12:12:03 crc kubenswrapper[4725]: I0120 12:12:03.539402 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77qvq\" (UniqueName: \"kubernetes.io/projected/5f43a5ae-ed9d-43b3-9729-5c1110c63635-kube-api-access-77qvq\") pod \"5f43a5ae-ed9d-43b3-9729-5c1110c63635\" (UID: \"5f43a5ae-ed9d-43b3-9729-5c1110c63635\") " Jan 20 12:12:03 crc kubenswrapper[4725]: I0120 12:12:03.539491 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f43a5ae-ed9d-43b3-9729-5c1110c63635-catalog-content\") pod \"5f43a5ae-ed9d-43b3-9729-5c1110c63635\" (UID: \"5f43a5ae-ed9d-43b3-9729-5c1110c63635\") " Jan 20 12:12:03 crc kubenswrapper[4725]: I0120 12:12:03.543220 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f43a5ae-ed9d-43b3-9729-5c1110c63635-utilities" (OuterVolumeSpecName: "utilities") pod "5f43a5ae-ed9d-43b3-9729-5c1110c63635" (UID: "5f43a5ae-ed9d-43b3-9729-5c1110c63635"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 12:12:03 crc kubenswrapper[4725]: I0120 12:12:03.560408 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f43a5ae-ed9d-43b3-9729-5c1110c63635-kube-api-access-77qvq" (OuterVolumeSpecName: "kube-api-access-77qvq") pod "5f43a5ae-ed9d-43b3-9729-5c1110c63635" (UID: "5f43a5ae-ed9d-43b3-9729-5c1110c63635"). InnerVolumeSpecName "kube-api-access-77qvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 12:12:03 crc kubenswrapper[4725]: I0120 12:12:03.592956 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f43a5ae-ed9d-43b3-9729-5c1110c63635-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f43a5ae-ed9d-43b3-9729-5c1110c63635" (UID: "5f43a5ae-ed9d-43b3-9729-5c1110c63635"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 12:12:03 crc kubenswrapper[4725]: I0120 12:12:03.641876 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f43a5ae-ed9d-43b3-9729-5c1110c63635-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 12:12:03 crc kubenswrapper[4725]: I0120 12:12:03.641930 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77qvq\" (UniqueName: \"kubernetes.io/projected/5f43a5ae-ed9d-43b3-9729-5c1110c63635-kube-api-access-77qvq\") on node \"crc\" DevicePath \"\"" Jan 20 12:12:03 crc kubenswrapper[4725]: I0120 12:12:03.641949 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f43a5ae-ed9d-43b3-9729-5c1110c63635-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 12:12:04 crc kubenswrapper[4725]: I0120 12:12:04.442326 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5m7sx" Jan 20 12:12:04 crc kubenswrapper[4725]: I0120 12:12:04.480561 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5m7sx"] Jan 20 12:12:04 crc kubenswrapper[4725]: I0120 12:12:04.488906 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5m7sx"] Jan 20 12:12:04 crc kubenswrapper[4725]: I0120 12:12:04.951885 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f43a5ae-ed9d-43b3-9729-5c1110c63635" path="/var/lib/kubelet/pods/5f43a5ae-ed9d-43b3-9729-5c1110c63635/volumes" Jan 20 12:12:07 crc kubenswrapper[4725]: I0120 12:12:07.934895 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:12:07 crc kubenswrapper[4725]: E0120 12:12:07.935359 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:12:19 crc kubenswrapper[4725]: I0120 12:12:19.934367 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:12:19 crc kubenswrapper[4725]: E0120 12:12:19.935710 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" 
podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:12:30 crc kubenswrapper[4725]: I0120 12:12:30.932938 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:12:30 crc kubenswrapper[4725]: E0120 12:12:30.933892 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:12:43 crc kubenswrapper[4725]: I0120 12:12:43.933760 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:12:43 crc kubenswrapper[4725]: E0120 12:12:43.935205 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:12:54 crc kubenswrapper[4725]: I0120 12:12:54.932914 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:12:54 crc kubenswrapper[4725]: E0120 12:12:54.933910 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:13:08 crc kubenswrapper[4725]: I0120 12:13:08.937130 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:13:08 crc kubenswrapper[4725]: E0120 12:13:08.938266 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:13:23 crc kubenswrapper[4725]: I0120 12:13:23.050702 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:13:23 crc kubenswrapper[4725]: E0120 12:13:23.052233 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:13:36 crc kubenswrapper[4725]: I0120 12:13:36.932444 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:13:37 crc kubenswrapper[4725]: I0120 12:13:37.239521 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"99897f9bf3c4270c0a3d94baa343e5cd6db1247874cead15edc920c14058a7e6"} Jan 20 12:13:50 crc 
kubenswrapper[4725]: I0120 12:13:50.509685 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z6446"] Jan 20 12:13:50 crc kubenswrapper[4725]: E0120 12:13:50.510985 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f43a5ae-ed9d-43b3-9729-5c1110c63635" containerName="extract-content" Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.511018 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f43a5ae-ed9d-43b3-9729-5c1110c63635" containerName="extract-content" Jan 20 12:13:50 crc kubenswrapper[4725]: E0120 12:13:50.511048 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f43a5ae-ed9d-43b3-9729-5c1110c63635" containerName="registry-server" Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.511060 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f43a5ae-ed9d-43b3-9729-5c1110c63635" containerName="registry-server" Jan 20 12:13:50 crc kubenswrapper[4725]: E0120 12:13:50.511104 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f43a5ae-ed9d-43b3-9729-5c1110c63635" containerName="extract-utilities" Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.511114 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f43a5ae-ed9d-43b3-9729-5c1110c63635" containerName="extract-utilities" Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.511301 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f43a5ae-ed9d-43b3-9729-5c1110c63635" containerName="registry-server" Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.512504 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z6446" Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.522992 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z6446"] Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.697599 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlfwz\" (UniqueName: \"kubernetes.io/projected/c2ef7efe-4c79-4017-903c-aa5ecb307df0-kube-api-access-hlfwz\") pod \"community-operators-z6446\" (UID: \"c2ef7efe-4c79-4017-903c-aa5ecb307df0\") " pod="openshift-marketplace/community-operators-z6446" Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.697702 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2ef7efe-4c79-4017-903c-aa5ecb307df0-utilities\") pod \"community-operators-z6446\" (UID: \"c2ef7efe-4c79-4017-903c-aa5ecb307df0\") " pod="openshift-marketplace/community-operators-z6446" Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.697726 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2ef7efe-4c79-4017-903c-aa5ecb307df0-catalog-content\") pod \"community-operators-z6446\" (UID: \"c2ef7efe-4c79-4017-903c-aa5ecb307df0\") " pod="openshift-marketplace/community-operators-z6446" Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.799528 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlfwz\" (UniqueName: \"kubernetes.io/projected/c2ef7efe-4c79-4017-903c-aa5ecb307df0-kube-api-access-hlfwz\") pod \"community-operators-z6446\" (UID: \"c2ef7efe-4c79-4017-903c-aa5ecb307df0\") " pod="openshift-marketplace/community-operators-z6446" Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.799613 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2ef7efe-4c79-4017-903c-aa5ecb307df0-utilities\") pod \"community-operators-z6446\" (UID: \"c2ef7efe-4c79-4017-903c-aa5ecb307df0\") " pod="openshift-marketplace/community-operators-z6446" Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.799635 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2ef7efe-4c79-4017-903c-aa5ecb307df0-catalog-content\") pod \"community-operators-z6446\" (UID: \"c2ef7efe-4c79-4017-903c-aa5ecb307df0\") " pod="openshift-marketplace/community-operators-z6446" Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.800336 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2ef7efe-4c79-4017-903c-aa5ecb307df0-catalog-content\") pod \"community-operators-z6446\" (UID: \"c2ef7efe-4c79-4017-903c-aa5ecb307df0\") " pod="openshift-marketplace/community-operators-z6446" Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.800948 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2ef7efe-4c79-4017-903c-aa5ecb307df0-utilities\") pod \"community-operators-z6446\" (UID: \"c2ef7efe-4c79-4017-903c-aa5ecb307df0\") " pod="openshift-marketplace/community-operators-z6446" Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.823345 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlfwz\" (UniqueName: \"kubernetes.io/projected/c2ef7efe-4c79-4017-903c-aa5ecb307df0-kube-api-access-hlfwz\") pod \"community-operators-z6446\" (UID: \"c2ef7efe-4c79-4017-903c-aa5ecb307df0\") " pod="openshift-marketplace/community-operators-z6446" Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.844769 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z6446" Jan 20 12:13:51 crc kubenswrapper[4725]: I0120 12:13:51.539889 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z6446"] Jan 20 12:13:51 crc kubenswrapper[4725]: W0120 12:13:51.546893 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2ef7efe_4c79_4017_903c_aa5ecb307df0.slice/crio-4f32093d3b01bd134a8a07fef6bd73609c7cbb46a53ebbd3d6e941ffd66fec85 WatchSource:0}: Error finding container 4f32093d3b01bd134a8a07fef6bd73609c7cbb46a53ebbd3d6e941ffd66fec85: Status 404 returned error can't find the container with id 4f32093d3b01bd134a8a07fef6bd73609c7cbb46a53ebbd3d6e941ffd66fec85 Jan 20 12:13:52 crc kubenswrapper[4725]: I0120 12:13:52.551840 4725 generic.go:334] "Generic (PLEG): container finished" podID="c2ef7efe-4c79-4017-903c-aa5ecb307df0" containerID="8b650c3f884771f6b8012af8c700a2a9c63c439a2436778c0694ae94e31d1bf3" exitCode=0 Jan 20 12:13:52 crc kubenswrapper[4725]: I0120 12:13:52.552051 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6446" event={"ID":"c2ef7efe-4c79-4017-903c-aa5ecb307df0","Type":"ContainerDied","Data":"8b650c3f884771f6b8012af8c700a2a9c63c439a2436778c0694ae94e31d1bf3"} Jan 20 12:13:52 crc kubenswrapper[4725]: I0120 12:13:52.552250 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6446" event={"ID":"c2ef7efe-4c79-4017-903c-aa5ecb307df0","Type":"ContainerStarted","Data":"4f32093d3b01bd134a8a07fef6bd73609c7cbb46a53ebbd3d6e941ffd66fec85"} Jan 20 12:13:53 crc kubenswrapper[4725]: I0120 12:13:53.563665 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6446" 
event={"ID":"c2ef7efe-4c79-4017-903c-aa5ecb307df0","Type":"ContainerStarted","Data":"e299d57fe3b730427479aa74a338c6276dc2a93442c8ef04ec170d411d8ae033"} Jan 20 12:13:54 crc kubenswrapper[4725]: I0120 12:13:54.574843 4725 generic.go:334] "Generic (PLEG): container finished" podID="c2ef7efe-4c79-4017-903c-aa5ecb307df0" containerID="e299d57fe3b730427479aa74a338c6276dc2a93442c8ef04ec170d411d8ae033" exitCode=0 Jan 20 12:13:54 crc kubenswrapper[4725]: I0120 12:13:54.574910 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6446" event={"ID":"c2ef7efe-4c79-4017-903c-aa5ecb307df0","Type":"ContainerDied","Data":"e299d57fe3b730427479aa74a338c6276dc2a93442c8ef04ec170d411d8ae033"} Jan 20 12:13:55 crc kubenswrapper[4725]: I0120 12:13:55.584281 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6446" event={"ID":"c2ef7efe-4c79-4017-903c-aa5ecb307df0","Type":"ContainerStarted","Data":"4ea67bd5b9c937b7b33de544db8c146850cf32b44561494682f5dae6c6225a49"} Jan 20 12:13:55 crc kubenswrapper[4725]: I0120 12:13:55.800810 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z6446" podStartSLOduration=3.32266467 podStartE2EDuration="5.800777118s" podCreationTimestamp="2026-01-20 12:13:50 +0000 UTC" firstStartedPulling="2026-01-20 12:13:52.554387721 +0000 UTC m=+4160.762709694" lastFinishedPulling="2026-01-20 12:13:55.032500169 +0000 UTC m=+4163.240822142" observedRunningTime="2026-01-20 12:13:55.797954829 +0000 UTC m=+4164.006276842" watchObservedRunningTime="2026-01-20 12:13:55.800777118 +0000 UTC m=+4164.009099091" Jan 20 12:14:00 crc kubenswrapper[4725]: I0120 12:14:00.846025 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z6446" Jan 20 12:14:00 crc kubenswrapper[4725]: I0120 12:14:00.846583 4725 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-z6446" Jan 20 12:14:00 crc kubenswrapper[4725]: I0120 12:14:00.948120 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z6446" Jan 20 12:14:01 crc kubenswrapper[4725]: I0120 12:14:01.714523 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z6446" Jan 20 12:14:01 crc kubenswrapper[4725]: I0120 12:14:01.785012 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z6446"] Jan 20 12:14:03 crc kubenswrapper[4725]: I0120 12:14:03.670194 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z6446" podUID="c2ef7efe-4c79-4017-903c-aa5ecb307df0" containerName="registry-server" containerID="cri-o://4ea67bd5b9c937b7b33de544db8c146850cf32b44561494682f5dae6c6225a49" gracePeriod=2 Jan 20 12:14:04 crc kubenswrapper[4725]: I0120 12:14:04.687814 4725 generic.go:334] "Generic (PLEG): container finished" podID="c2ef7efe-4c79-4017-903c-aa5ecb307df0" containerID="4ea67bd5b9c937b7b33de544db8c146850cf32b44561494682f5dae6c6225a49" exitCode=0 Jan 20 12:14:04 crc kubenswrapper[4725]: I0120 12:14:04.688062 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6446" event={"ID":"c2ef7efe-4c79-4017-903c-aa5ecb307df0","Type":"ContainerDied","Data":"4ea67bd5b9c937b7b33de544db8c146850cf32b44561494682f5dae6c6225a49"} Jan 20 12:14:04 crc kubenswrapper[4725]: I0120 12:14:04.688263 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6446" event={"ID":"c2ef7efe-4c79-4017-903c-aa5ecb307df0","Type":"ContainerDied","Data":"4f32093d3b01bd134a8a07fef6bd73609c7cbb46a53ebbd3d6e941ffd66fec85"} Jan 20 12:14:04 crc kubenswrapper[4725]: I0120 12:14:04.688288 4725 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f32093d3b01bd134a8a07fef6bd73609c7cbb46a53ebbd3d6e941ffd66fec85" Jan 20 12:14:04 crc kubenswrapper[4725]: I0120 12:14:04.717701 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z6446" Jan 20 12:14:04 crc kubenswrapper[4725]: I0120 12:14:04.816043 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2ef7efe-4c79-4017-903c-aa5ecb307df0-utilities\") pod \"c2ef7efe-4c79-4017-903c-aa5ecb307df0\" (UID: \"c2ef7efe-4c79-4017-903c-aa5ecb307df0\") " Jan 20 12:14:04 crc kubenswrapper[4725]: I0120 12:14:04.816150 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlfwz\" (UniqueName: \"kubernetes.io/projected/c2ef7efe-4c79-4017-903c-aa5ecb307df0-kube-api-access-hlfwz\") pod \"c2ef7efe-4c79-4017-903c-aa5ecb307df0\" (UID: \"c2ef7efe-4c79-4017-903c-aa5ecb307df0\") " Jan 20 12:14:04 crc kubenswrapper[4725]: I0120 12:14:04.816337 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2ef7efe-4c79-4017-903c-aa5ecb307df0-catalog-content\") pod \"c2ef7efe-4c79-4017-903c-aa5ecb307df0\" (UID: \"c2ef7efe-4c79-4017-903c-aa5ecb307df0\") " Jan 20 12:14:04 crc kubenswrapper[4725]: I0120 12:14:04.817416 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2ef7efe-4c79-4017-903c-aa5ecb307df0-utilities" (OuterVolumeSpecName: "utilities") pod "c2ef7efe-4c79-4017-903c-aa5ecb307df0" (UID: "c2ef7efe-4c79-4017-903c-aa5ecb307df0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 12:14:04 crc kubenswrapper[4725]: I0120 12:14:04.833520 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2ef7efe-4c79-4017-903c-aa5ecb307df0-kube-api-access-hlfwz" (OuterVolumeSpecName: "kube-api-access-hlfwz") pod "c2ef7efe-4c79-4017-903c-aa5ecb307df0" (UID: "c2ef7efe-4c79-4017-903c-aa5ecb307df0"). InnerVolumeSpecName "kube-api-access-hlfwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 12:14:04 crc kubenswrapper[4725]: I0120 12:14:04.895995 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2ef7efe-4c79-4017-903c-aa5ecb307df0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2ef7efe-4c79-4017-903c-aa5ecb307df0" (UID: "c2ef7efe-4c79-4017-903c-aa5ecb307df0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 12:14:04 crc kubenswrapper[4725]: I0120 12:14:04.918919 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2ef7efe-4c79-4017-903c-aa5ecb307df0-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 12:14:04 crc kubenswrapper[4725]: I0120 12:14:04.918963 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlfwz\" (UniqueName: \"kubernetes.io/projected/c2ef7efe-4c79-4017-903c-aa5ecb307df0-kube-api-access-hlfwz\") on node \"crc\" DevicePath \"\"" Jan 20 12:14:04 crc kubenswrapper[4725]: I0120 12:14:04.918974 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2ef7efe-4c79-4017-903c-aa5ecb307df0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 12:14:05 crc kubenswrapper[4725]: I0120 12:14:05.694487 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z6446" Jan 20 12:14:05 crc kubenswrapper[4725]: I0120 12:14:05.724407 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z6446"] Jan 20 12:14:05 crc kubenswrapper[4725]: I0120 12:14:05.734702 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z6446"] Jan 20 12:14:06 crc kubenswrapper[4725]: I0120 12:14:06.943640 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2ef7efe-4c79-4017-903c-aa5ecb307df0" path="/var/lib/kubelet/pods/c2ef7efe-4c79-4017-903c-aa5ecb307df0/volumes" Jan 20 12:14:37 crc kubenswrapper[4725]: I0120 12:14:37.282685 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-pcqpc"] Jan 20 12:14:37 crc kubenswrapper[4725]: E0120 12:14:37.283902 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2ef7efe-4c79-4017-903c-aa5ecb307df0" containerName="extract-utilities" Jan 20 12:14:37 crc kubenswrapper[4725]: I0120 12:14:37.283923 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2ef7efe-4c79-4017-903c-aa5ecb307df0" containerName="extract-utilities" Jan 20 12:14:37 crc kubenswrapper[4725]: E0120 12:14:37.283946 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2ef7efe-4c79-4017-903c-aa5ecb307df0" containerName="registry-server" Jan 20 12:14:37 crc kubenswrapper[4725]: I0120 12:14:37.283954 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2ef7efe-4c79-4017-903c-aa5ecb307df0" containerName="registry-server" Jan 20 12:14:37 crc kubenswrapper[4725]: E0120 12:14:37.283978 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2ef7efe-4c79-4017-903c-aa5ecb307df0" containerName="extract-content" Jan 20 12:14:37 crc kubenswrapper[4725]: I0120 12:14:37.283987 4725 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c2ef7efe-4c79-4017-903c-aa5ecb307df0" containerName="extract-content" Jan 20 12:14:37 crc kubenswrapper[4725]: I0120 12:14:37.284216 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2ef7efe-4c79-4017-903c-aa5ecb307df0" containerName="registry-server" Jan 20 12:14:37 crc kubenswrapper[4725]: I0120 12:14:37.284959 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-pcqpc" Jan 20 12:14:37 crc kubenswrapper[4725]: I0120 12:14:37.291480 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-pcqpc"] Jan 20 12:14:37 crc kubenswrapper[4725]: I0120 12:14:37.430440 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mc9w\" (UniqueName: \"kubernetes.io/projected/3afa6bc5-f864-43f4-9eb4-a7dbc8de5893-kube-api-access-8mc9w\") pod \"infrawatch-operators-pcqpc\" (UID: \"3afa6bc5-f864-43f4-9eb4-a7dbc8de5893\") " pod="service-telemetry/infrawatch-operators-pcqpc" Jan 20 12:14:37 crc kubenswrapper[4725]: I0120 12:14:37.532457 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mc9w\" (UniqueName: \"kubernetes.io/projected/3afa6bc5-f864-43f4-9eb4-a7dbc8de5893-kube-api-access-8mc9w\") pod \"infrawatch-operators-pcqpc\" (UID: \"3afa6bc5-f864-43f4-9eb4-a7dbc8de5893\") " pod="service-telemetry/infrawatch-operators-pcqpc" Jan 20 12:14:37 crc kubenswrapper[4725]: I0120 12:14:37.561117 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mc9w\" (UniqueName: \"kubernetes.io/projected/3afa6bc5-f864-43f4-9eb4-a7dbc8de5893-kube-api-access-8mc9w\") pod \"infrawatch-operators-pcqpc\" (UID: \"3afa6bc5-f864-43f4-9eb4-a7dbc8de5893\") " pod="service-telemetry/infrawatch-operators-pcqpc" Jan 20 12:14:37 crc kubenswrapper[4725]: I0120 12:14:37.649858 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-pcqpc" Jan 20 12:14:37 crc kubenswrapper[4725]: I0120 12:14:37.923631 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-pcqpc"] Jan 20 12:14:37 crc kubenswrapper[4725]: I0120 12:14:37.936447 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 12:14:38 crc kubenswrapper[4725]: I0120 12:14:38.057674 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-pcqpc" event={"ID":"3afa6bc5-f864-43f4-9eb4-a7dbc8de5893","Type":"ContainerStarted","Data":"5d4ca42ad1acab36b21f0c6b0dc950eb93553276b1ffe4509637de1202cc10fa"} Jan 20 12:14:39 crc kubenswrapper[4725]: I0120 12:14:39.070466 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-pcqpc" event={"ID":"3afa6bc5-f864-43f4-9eb4-a7dbc8de5893","Type":"ContainerStarted","Data":"6f1f18d0f69625c910770d7995b6732911b80b75dbd1f72c7e20e3152bf17e91"} Jan 20 12:14:39 crc kubenswrapper[4725]: I0120 12:14:39.096468 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-pcqpc" podStartSLOduration=1.8763073590000001 podStartE2EDuration="2.096423194s" podCreationTimestamp="2026-01-20 12:14:37 +0000 UTC" firstStartedPulling="2026-01-20 12:14:37.936131586 +0000 UTC m=+4206.144453559" lastFinishedPulling="2026-01-20 12:14:38.156247421 +0000 UTC m=+4206.364569394" observedRunningTime="2026-01-20 12:14:39.090615421 +0000 UTC m=+4207.298937424" watchObservedRunningTime="2026-01-20 12:14:39.096423194 +0000 UTC m=+4207.304745207" Jan 20 12:14:47 crc kubenswrapper[4725]: I0120 12:14:47.651728 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-pcqpc" Jan 20 12:14:47 crc kubenswrapper[4725]: I0120 12:14:47.652747 4725 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="service-telemetry/infrawatch-operators-pcqpc" Jan 20 12:14:47 crc kubenswrapper[4725]: I0120 12:14:47.713636 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-pcqpc" Jan 20 12:14:48 crc kubenswrapper[4725]: I0120 12:14:48.228734 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-pcqpc" Jan 20 12:14:49 crc kubenswrapper[4725]: I0120 12:14:49.025017 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-pcqpc"] Jan 20 12:14:50 crc kubenswrapper[4725]: I0120 12:14:50.205423 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-pcqpc" podUID="3afa6bc5-f864-43f4-9eb4-a7dbc8de5893" containerName="registry-server" containerID="cri-o://6f1f18d0f69625c910770d7995b6732911b80b75dbd1f72c7e20e3152bf17e91" gracePeriod=2 Jan 20 12:14:50 crc kubenswrapper[4725]: I0120 12:14:50.635950 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-pcqpc" Jan 20 12:14:50 crc kubenswrapper[4725]: I0120 12:14:50.800869 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mc9w\" (UniqueName: \"kubernetes.io/projected/3afa6bc5-f864-43f4-9eb4-a7dbc8de5893-kube-api-access-8mc9w\") pod \"3afa6bc5-f864-43f4-9eb4-a7dbc8de5893\" (UID: \"3afa6bc5-f864-43f4-9eb4-a7dbc8de5893\") " Jan 20 12:14:50 crc kubenswrapper[4725]: I0120 12:14:50.823489 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3afa6bc5-f864-43f4-9eb4-a7dbc8de5893-kube-api-access-8mc9w" (OuterVolumeSpecName: "kube-api-access-8mc9w") pod "3afa6bc5-f864-43f4-9eb4-a7dbc8de5893" (UID: "3afa6bc5-f864-43f4-9eb4-a7dbc8de5893"). InnerVolumeSpecName "kube-api-access-8mc9w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 12:14:50 crc kubenswrapper[4725]: I0120 12:14:50.904038 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mc9w\" (UniqueName: \"kubernetes.io/projected/3afa6bc5-f864-43f4-9eb4-a7dbc8de5893-kube-api-access-8mc9w\") on node \"crc\" DevicePath \"\"" Jan 20 12:14:51 crc kubenswrapper[4725]: I0120 12:14:51.218659 4725 generic.go:334] "Generic (PLEG): container finished" podID="3afa6bc5-f864-43f4-9eb4-a7dbc8de5893" containerID="6f1f18d0f69625c910770d7995b6732911b80b75dbd1f72c7e20e3152bf17e91" exitCode=0 Jan 20 12:14:51 crc kubenswrapper[4725]: I0120 12:14:51.218736 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-pcqpc" event={"ID":"3afa6bc5-f864-43f4-9eb4-a7dbc8de5893","Type":"ContainerDied","Data":"6f1f18d0f69625c910770d7995b6732911b80b75dbd1f72c7e20e3152bf17e91"} Jan 20 12:14:51 crc kubenswrapper[4725]: I0120 12:14:51.218819 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-pcqpc" event={"ID":"3afa6bc5-f864-43f4-9eb4-a7dbc8de5893","Type":"ContainerDied","Data":"5d4ca42ad1acab36b21f0c6b0dc950eb93553276b1ffe4509637de1202cc10fa"} Jan 20 12:14:51 crc kubenswrapper[4725]: I0120 12:14:51.218842 4725 scope.go:117] "RemoveContainer" containerID="6f1f18d0f69625c910770d7995b6732911b80b75dbd1f72c7e20e3152bf17e91" Jan 20 12:14:51 crc kubenswrapper[4725]: I0120 12:14:51.221005 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-pcqpc" Jan 20 12:14:51 crc kubenswrapper[4725]: I0120 12:14:51.256146 4725 scope.go:117] "RemoveContainer" containerID="6f1f18d0f69625c910770d7995b6732911b80b75dbd1f72c7e20e3152bf17e91" Jan 20 12:14:51 crc kubenswrapper[4725]: E0120 12:14:51.258945 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f1f18d0f69625c910770d7995b6732911b80b75dbd1f72c7e20e3152bf17e91\": container with ID starting with 6f1f18d0f69625c910770d7995b6732911b80b75dbd1f72c7e20e3152bf17e91 not found: ID does not exist" containerID="6f1f18d0f69625c910770d7995b6732911b80b75dbd1f72c7e20e3152bf17e91" Jan 20 12:14:51 crc kubenswrapper[4725]: I0120 12:14:51.259338 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f1f18d0f69625c910770d7995b6732911b80b75dbd1f72c7e20e3152bf17e91"} err="failed to get container status \"6f1f18d0f69625c910770d7995b6732911b80b75dbd1f72c7e20e3152bf17e91\": rpc error: code = NotFound desc = could not find container \"6f1f18d0f69625c910770d7995b6732911b80b75dbd1f72c7e20e3152bf17e91\": container with ID starting with 6f1f18d0f69625c910770d7995b6732911b80b75dbd1f72c7e20e3152bf17e91 not found: ID does not exist" Jan 20 12:14:51 crc kubenswrapper[4725]: I0120 12:14:51.269534 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-pcqpc"] Jan 20 12:14:51 crc kubenswrapper[4725]: I0120 12:14:51.278003 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-pcqpc"] Jan 20 12:14:52 crc kubenswrapper[4725]: I0120 12:14:52.949182 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3afa6bc5-f864-43f4-9eb4-a7dbc8de5893" path="/var/lib/kubelet/pods/3afa6bc5-f864-43f4-9eb4-a7dbc8de5893/volumes" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.199611 4725 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl"] Jan 20 12:15:00 crc kubenswrapper[4725]: E0120 12:15:00.203499 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3afa6bc5-f864-43f4-9eb4-a7dbc8de5893" containerName="registry-server" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.203519 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3afa6bc5-f864-43f4-9eb4-a7dbc8de5893" containerName="registry-server" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.203677 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="3afa6bc5-f864-43f4-9eb4-a7dbc8de5893" containerName="registry-server" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.204269 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.206572 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.207222 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.217519 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl"] Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.294474 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/604a5ea1-fb17-44e8-9c63-30238fdea94d-secret-volume\") pod \"collect-profiles-29481855-bbzhl\" (UID: \"604a5ea1-fb17-44e8-9c63-30238fdea94d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 
12:15:00.294607 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl4lr\" (UniqueName: \"kubernetes.io/projected/604a5ea1-fb17-44e8-9c63-30238fdea94d-kube-api-access-wl4lr\") pod \"collect-profiles-29481855-bbzhl\" (UID: \"604a5ea1-fb17-44e8-9c63-30238fdea94d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.294677 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/604a5ea1-fb17-44e8-9c63-30238fdea94d-config-volume\") pod \"collect-profiles-29481855-bbzhl\" (UID: \"604a5ea1-fb17-44e8-9c63-30238fdea94d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.396266 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl4lr\" (UniqueName: \"kubernetes.io/projected/604a5ea1-fb17-44e8-9c63-30238fdea94d-kube-api-access-wl4lr\") pod \"collect-profiles-29481855-bbzhl\" (UID: \"604a5ea1-fb17-44e8-9c63-30238fdea94d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.396351 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/604a5ea1-fb17-44e8-9c63-30238fdea94d-config-volume\") pod \"collect-profiles-29481855-bbzhl\" (UID: \"604a5ea1-fb17-44e8-9c63-30238fdea94d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.396462 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/604a5ea1-fb17-44e8-9c63-30238fdea94d-secret-volume\") pod \"collect-profiles-29481855-bbzhl\" 
(UID: \"604a5ea1-fb17-44e8-9c63-30238fdea94d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.398315 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/604a5ea1-fb17-44e8-9c63-30238fdea94d-config-volume\") pod \"collect-profiles-29481855-bbzhl\" (UID: \"604a5ea1-fb17-44e8-9c63-30238fdea94d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.414229 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/604a5ea1-fb17-44e8-9c63-30238fdea94d-secret-volume\") pod \"collect-profiles-29481855-bbzhl\" (UID: \"604a5ea1-fb17-44e8-9c63-30238fdea94d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.418412 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl4lr\" (UniqueName: \"kubernetes.io/projected/604a5ea1-fb17-44e8-9c63-30238fdea94d-kube-api-access-wl4lr\") pod \"collect-profiles-29481855-bbzhl\" (UID: \"604a5ea1-fb17-44e8-9c63-30238fdea94d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.523689 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.821942 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl"] Jan 20 12:15:01 crc kubenswrapper[4725]: I0120 12:15:01.337058 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" event={"ID":"604a5ea1-fb17-44e8-9c63-30238fdea94d","Type":"ContainerStarted","Data":"fa4cf535d5e81a4cf0ea0b637ecfa36dfafb70bf14c9057ee7b5f5e6043e358e"} Jan 20 12:15:01 crc kubenswrapper[4725]: I0120 12:15:01.337140 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" event={"ID":"604a5ea1-fb17-44e8-9c63-30238fdea94d","Type":"ContainerStarted","Data":"917d494b66019293ca66267c446d95a9639ed0de12bcb3eac631abc66f0d47a7"} Jan 20 12:15:01 crc kubenswrapper[4725]: I0120 12:15:01.377646 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" podStartSLOduration=1.377607866 podStartE2EDuration="1.377607866s" podCreationTimestamp="2026-01-20 12:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 12:15:01.36156985 +0000 UTC m=+4229.569891833" watchObservedRunningTime="2026-01-20 12:15:01.377607866 +0000 UTC m=+4229.585929849" Jan 20 12:15:02 crc kubenswrapper[4725]: I0120 12:15:02.345627 4725 generic.go:334] "Generic (PLEG): container finished" podID="604a5ea1-fb17-44e8-9c63-30238fdea94d" containerID="fa4cf535d5e81a4cf0ea0b637ecfa36dfafb70bf14c9057ee7b5f5e6043e358e" exitCode=0 Jan 20 12:15:02 crc kubenswrapper[4725]: I0120 12:15:02.345753 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" event={"ID":"604a5ea1-fb17-44e8-9c63-30238fdea94d","Type":"ContainerDied","Data":"fa4cf535d5e81a4cf0ea0b637ecfa36dfafb70bf14c9057ee7b5f5e6043e358e"} Jan 20 12:15:03 crc kubenswrapper[4725]: I0120 12:15:03.622306 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" Jan 20 12:15:03 crc kubenswrapper[4725]: I0120 12:15:03.757052 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl4lr\" (UniqueName: \"kubernetes.io/projected/604a5ea1-fb17-44e8-9c63-30238fdea94d-kube-api-access-wl4lr\") pod \"604a5ea1-fb17-44e8-9c63-30238fdea94d\" (UID: \"604a5ea1-fb17-44e8-9c63-30238fdea94d\") " Jan 20 12:15:03 crc kubenswrapper[4725]: I0120 12:15:03.757179 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/604a5ea1-fb17-44e8-9c63-30238fdea94d-config-volume\") pod \"604a5ea1-fb17-44e8-9c63-30238fdea94d\" (UID: \"604a5ea1-fb17-44e8-9c63-30238fdea94d\") " Jan 20 12:15:03 crc kubenswrapper[4725]: I0120 12:15:03.757226 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/604a5ea1-fb17-44e8-9c63-30238fdea94d-secret-volume\") pod \"604a5ea1-fb17-44e8-9c63-30238fdea94d\" (UID: \"604a5ea1-fb17-44e8-9c63-30238fdea94d\") " Jan 20 12:15:03 crc kubenswrapper[4725]: I0120 12:15:03.759588 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/604a5ea1-fb17-44e8-9c63-30238fdea94d-config-volume" (OuterVolumeSpecName: "config-volume") pod "604a5ea1-fb17-44e8-9c63-30238fdea94d" (UID: "604a5ea1-fb17-44e8-9c63-30238fdea94d"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 12:15:03 crc kubenswrapper[4725]: I0120 12:15:03.767284 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/604a5ea1-fb17-44e8-9c63-30238fdea94d-kube-api-access-wl4lr" (OuterVolumeSpecName: "kube-api-access-wl4lr") pod "604a5ea1-fb17-44e8-9c63-30238fdea94d" (UID: "604a5ea1-fb17-44e8-9c63-30238fdea94d"). InnerVolumeSpecName "kube-api-access-wl4lr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 12:15:03 crc kubenswrapper[4725]: I0120 12:15:03.782981 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/604a5ea1-fb17-44e8-9c63-30238fdea94d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "604a5ea1-fb17-44e8-9c63-30238fdea94d" (UID: "604a5ea1-fb17-44e8-9c63-30238fdea94d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 12:15:03 crc kubenswrapper[4725]: I0120 12:15:03.859598 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl4lr\" (UniqueName: \"kubernetes.io/projected/604a5ea1-fb17-44e8-9c63-30238fdea94d-kube-api-access-wl4lr\") on node \"crc\" DevicePath \"\"" Jan 20 12:15:03 crc kubenswrapper[4725]: I0120 12:15:03.859673 4725 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/604a5ea1-fb17-44e8-9c63-30238fdea94d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 12:15:03 crc kubenswrapper[4725]: I0120 12:15:03.859686 4725 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/604a5ea1-fb17-44e8-9c63-30238fdea94d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 12:15:04 crc kubenswrapper[4725]: I0120 12:15:04.370865 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" 
event={"ID":"604a5ea1-fb17-44e8-9c63-30238fdea94d","Type":"ContainerDied","Data":"917d494b66019293ca66267c446d95a9639ed0de12bcb3eac631abc66f0d47a7"} Jan 20 12:15:04 crc kubenswrapper[4725]: I0120 12:15:04.370926 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="917d494b66019293ca66267c446d95a9639ed0de12bcb3eac631abc66f0d47a7" Jan 20 12:15:04 crc kubenswrapper[4725]: I0120 12:15:04.370954 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" Jan 20 12:15:04 crc kubenswrapper[4725]: I0120 12:15:04.721625 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc"] Jan 20 12:15:04 crc kubenswrapper[4725]: I0120 12:15:04.731354 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc"] Jan 20 12:15:04 crc kubenswrapper[4725]: I0120 12:15:04.943984 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fdb152c-7b26-4ed6-8bb8-6a846224c67b" path="/var/lib/kubelet/pods/0fdb152c-7b26-4ed6-8bb8-6a846224c67b/volumes" Jan 20 12:15:18 crc kubenswrapper[4725]: I0120 12:15:18.050393 4725 scope.go:117] "RemoveContainer" containerID="19fb964594f75fcdba986836c9a966bf2aa65e41d99e7666a933d08acb12b332" Jan 20 12:15:56 crc kubenswrapper[4725]: I0120 12:15:56.729464 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 12:15:56 crc kubenswrapper[4725]: I0120 12:15:56.730323 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 12:16:26 crc kubenswrapper[4725]: I0120 12:16:26.727722 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 12:16:26 crc kubenswrapper[4725]: I0120 12:16:26.728567 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 12:16:56 crc kubenswrapper[4725]: I0120 12:16:56.728229 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 12:16:56 crc kubenswrapper[4725]: I0120 12:16:56.728971 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 12:16:56 crc kubenswrapper[4725]: I0120 12:16:56.729057 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8"
Jan 20 12:16:56 crc kubenswrapper[4725]: I0120 12:16:56.730006 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"99897f9bf3c4270c0a3d94baa343e5cd6db1247874cead15edc920c14058a7e6"} pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 20 12:16:56 crc kubenswrapper[4725]: I0120 12:16:56.730303 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" containerID="cri-o://99897f9bf3c4270c0a3d94baa343e5cd6db1247874cead15edc920c14058a7e6" gracePeriod=600
Jan 20 12:16:57 crc kubenswrapper[4725]: I0120 12:16:57.601821 4725 generic.go:334] "Generic (PLEG): container finished" podID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerID="99897f9bf3c4270c0a3d94baa343e5cd6db1247874cead15edc920c14058a7e6" exitCode=0
Jan 20 12:16:57 crc kubenswrapper[4725]: I0120 12:16:57.601865 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerDied","Data":"99897f9bf3c4270c0a3d94baa343e5cd6db1247874cead15edc920c14058a7e6"}
Jan 20 12:16:57 crc kubenswrapper[4725]: I0120 12:16:57.602385 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d"}
Jan 20 12:16:57 crc kubenswrapper[4725]: I0120 12:16:57.602450 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b"
Jan 20 12:18:18 crc kubenswrapper[4725]: I0120 12:18:18.195578 4725 scope.go:117] "RemoveContainer" containerID="fe55a71fbad73183443a4a97a48dac2d17433bc1a0a0447ea38989d6cb15d0e4"
Jan 20 12:18:18 crc kubenswrapper[4725]: I0120 12:18:18.224440 4725 scope.go:117] "RemoveContainer" containerID="12fc0d4b7a6b440d05aae65bbaf75415b33cc1b772ffbbdf7c18502d8fa4db78"
Jan 20 12:18:18 crc kubenswrapper[4725]: I0120 12:18:18.244733 4725 scope.go:117] "RemoveContainer" containerID="b11d2d8a1b0606ecc18cd1499a12a7672ace55137edbf153607ef35e8279f66f"
Jan 20 12:19:26 crc kubenswrapper[4725]: I0120 12:19:26.727898 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 12:19:26 crc kubenswrapper[4725]: I0120 12:19:26.730467 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 12:19:56 crc kubenswrapper[4725]: I0120 12:19:56.728727 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 12:19:56 crc kubenswrapper[4725]: I0120 12:19:56.729898 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 12:20:18 crc kubenswrapper[4725]: I0120 12:20:18.367196 4725 scope.go:117] "RemoveContainer" containerID="e299d57fe3b730427479aa74a338c6276dc2a93442c8ef04ec170d411d8ae033"
Jan 20 12:20:18 crc kubenswrapper[4725]: I0120 12:20:18.424019 4725 scope.go:117] "RemoveContainer" containerID="8b650c3f884771f6b8012af8c700a2a9c63c439a2436778c0694ae94e31d1bf3"
Jan 20 12:20:18 crc kubenswrapper[4725]: I0120 12:20:18.475890 4725 scope.go:117] "RemoveContainer" containerID="4ea67bd5b9c937b7b33de544db8c146850cf32b44561494682f5dae6c6225a49"
Jan 20 12:20:23 crc kubenswrapper[4725]: I0120 12:20:23.174701 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-lp67f"]
Jan 20 12:20:23 crc kubenswrapper[4725]: E0120 12:20:23.175046 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="604a5ea1-fb17-44e8-9c63-30238fdea94d" containerName="collect-profiles"
Jan 20 12:20:23 crc kubenswrapper[4725]: I0120 12:20:23.175073 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="604a5ea1-fb17-44e8-9c63-30238fdea94d" containerName="collect-profiles"
Jan 20 12:20:23 crc kubenswrapper[4725]: I0120 12:20:23.175269 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="604a5ea1-fb17-44e8-9c63-30238fdea94d" containerName="collect-profiles"
Jan 20 12:20:23 crc kubenswrapper[4725]: I0120 12:20:23.175842 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-lp67f"
Jan 20 12:20:23 crc kubenswrapper[4725]: I0120 12:20:23.190406 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-lp67f"]
Jan 20 12:20:23 crc kubenswrapper[4725]: I0120 12:20:23.371488 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zckz\" (UniqueName: \"kubernetes.io/projected/7640ce90-ea6e-4f5c-af78-5502daee755f-kube-api-access-9zckz\") pod \"infrawatch-operators-lp67f\" (UID: \"7640ce90-ea6e-4f5c-af78-5502daee755f\") " pod="service-telemetry/infrawatch-operators-lp67f"
Jan 20 12:20:23 crc kubenswrapper[4725]: I0120 12:20:23.474656 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zckz\" (UniqueName: \"kubernetes.io/projected/7640ce90-ea6e-4f5c-af78-5502daee755f-kube-api-access-9zckz\") pod \"infrawatch-operators-lp67f\" (UID: \"7640ce90-ea6e-4f5c-af78-5502daee755f\") " pod="service-telemetry/infrawatch-operators-lp67f"
Jan 20 12:20:23 crc kubenswrapper[4725]: I0120 12:20:23.496353 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zckz\" (UniqueName: \"kubernetes.io/projected/7640ce90-ea6e-4f5c-af78-5502daee755f-kube-api-access-9zckz\") pod \"infrawatch-operators-lp67f\" (UID: \"7640ce90-ea6e-4f5c-af78-5502daee755f\") " pod="service-telemetry/infrawatch-operators-lp67f"
Jan 20 12:20:23 crc kubenswrapper[4725]: I0120 12:20:23.510191 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-lp67f"
Jan 20 12:20:23 crc kubenswrapper[4725]: I0120 12:20:23.953239 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-lp67f"]
Jan 20 12:20:23 crc kubenswrapper[4725]: I0120 12:20:23.975692 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 20 12:20:24 crc kubenswrapper[4725]: I0120 12:20:24.203540 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-lp67f" event={"ID":"7640ce90-ea6e-4f5c-af78-5502daee755f","Type":"ContainerStarted","Data":"9e145c53b4b86d35224ef71ce470b3f0b816b43113b52c7bf1cdd1fa40715647"}
Jan 20 12:20:25 crc kubenswrapper[4725]: I0120 12:20:25.214490 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-lp67f" event={"ID":"7640ce90-ea6e-4f5c-af78-5502daee755f","Type":"ContainerStarted","Data":"4713ab8c83320dfd043da22c541ce332cbede107df0f43c2c151d7229739291f"}
Jan 20 12:20:25 crc kubenswrapper[4725]: I0120 12:20:25.241658 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-lp67f" podStartSLOduration=2.098983361 podStartE2EDuration="2.241626331s" podCreationTimestamp="2026-01-20 12:20:23 +0000 UTC" firstStartedPulling="2026-01-20 12:20:23.975274346 +0000 UTC m=+4552.183596319" lastFinishedPulling="2026-01-20 12:20:24.117917316 +0000 UTC m=+4552.326239289" observedRunningTime="2026-01-20 12:20:25.2352512 +0000 UTC m=+4553.443573193" watchObservedRunningTime="2026-01-20 12:20:25.241626331 +0000 UTC m=+4553.449948294"
Jan 20 12:20:26 crc kubenswrapper[4725]: I0120 12:20:26.728126 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 12:20:26 crc kubenswrapper[4725]: I0120 12:20:26.728616 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 12:20:26 crc kubenswrapper[4725]: I0120 12:20:26.728707 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8"
Jan 20 12:20:26 crc kubenswrapper[4725]: I0120 12:20:26.729609 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d"} pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 20 12:20:26 crc kubenswrapper[4725]: I0120 12:20:26.729679 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" containerID="cri-o://08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" gracePeriod=600
Jan 20 12:20:26 crc kubenswrapper[4725]: E0120 12:20:26.854336 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:20:27 crc kubenswrapper[4725]: I0120 12:20:27.234845 4725 generic.go:334] "Generic (PLEG): container finished" podID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" exitCode=0
Jan 20 12:20:27 crc kubenswrapper[4725]: I0120 12:20:27.235245 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerDied","Data":"08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d"}
Jan 20 12:20:27 crc kubenswrapper[4725]: I0120 12:20:27.235404 4725 scope.go:117] "RemoveContainer" containerID="99897f9bf3c4270c0a3d94baa343e5cd6db1247874cead15edc920c14058a7e6"
Jan 20 12:20:27 crc kubenswrapper[4725]: I0120 12:20:27.236222 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d"
Jan 20 12:20:27 crc kubenswrapper[4725]: E0120 12:20:27.236533 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:20:33 crc kubenswrapper[4725]: I0120 12:20:33.510454 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-lp67f"
Jan 20 12:20:33 crc kubenswrapper[4725]: I0120 12:20:33.511300 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-lp67f"
Jan 20 12:20:33 crc kubenswrapper[4725]: I0120 12:20:33.540959 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-lp67f"
Jan 20 12:20:34 crc kubenswrapper[4725]: I0120 12:20:34.323243 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-lp67f"
Jan 20 12:20:34 crc kubenswrapper[4725]: I0120 12:20:34.364846 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-lp67f"]
Jan 20 12:20:36 crc kubenswrapper[4725]: I0120 12:20:36.310570 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-lp67f" podUID="7640ce90-ea6e-4f5c-af78-5502daee755f" containerName="registry-server" containerID="cri-o://4713ab8c83320dfd043da22c541ce332cbede107df0f43c2c151d7229739291f" gracePeriod=2
Jan 20 12:20:37 crc kubenswrapper[4725]: I0120 12:20:37.262669 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-lp67f"
Jan 20 12:20:37 crc kubenswrapper[4725]: I0120 12:20:37.319735 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zckz\" (UniqueName: \"kubernetes.io/projected/7640ce90-ea6e-4f5c-af78-5502daee755f-kube-api-access-9zckz\") pod \"7640ce90-ea6e-4f5c-af78-5502daee755f\" (UID: \"7640ce90-ea6e-4f5c-af78-5502daee755f\") "
Jan 20 12:20:37 crc kubenswrapper[4725]: I0120 12:20:37.322470 4725 generic.go:334] "Generic (PLEG): container finished" podID="7640ce90-ea6e-4f5c-af78-5502daee755f" containerID="4713ab8c83320dfd043da22c541ce332cbede107df0f43c2c151d7229739291f" exitCode=0
Jan 20 12:20:37 crc kubenswrapper[4725]: I0120 12:20:37.322511 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-lp67f" event={"ID":"7640ce90-ea6e-4f5c-af78-5502daee755f","Type":"ContainerDied","Data":"4713ab8c83320dfd043da22c541ce332cbede107df0f43c2c151d7229739291f"}
Jan 20 12:20:37 crc kubenswrapper[4725]: I0120 12:20:37.322543 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-lp67f"
Jan 20 12:20:37 crc kubenswrapper[4725]: I0120 12:20:37.322577 4725 scope.go:117] "RemoveContainer" containerID="4713ab8c83320dfd043da22c541ce332cbede107df0f43c2c151d7229739291f"
Jan 20 12:20:37 crc kubenswrapper[4725]: I0120 12:20:37.322564 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-lp67f" event={"ID":"7640ce90-ea6e-4f5c-af78-5502daee755f","Type":"ContainerDied","Data":"9e145c53b4b86d35224ef71ce470b3f0b816b43113b52c7bf1cdd1fa40715647"}
Jan 20 12:20:37 crc kubenswrapper[4725]: I0120 12:20:37.328884 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7640ce90-ea6e-4f5c-af78-5502daee755f-kube-api-access-9zckz" (OuterVolumeSpecName: "kube-api-access-9zckz") pod "7640ce90-ea6e-4f5c-af78-5502daee755f" (UID: "7640ce90-ea6e-4f5c-af78-5502daee755f"). InnerVolumeSpecName "kube-api-access-9zckz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 12:20:37 crc kubenswrapper[4725]: I0120 12:20:37.371923 4725 scope.go:117] "RemoveContainer" containerID="4713ab8c83320dfd043da22c541ce332cbede107df0f43c2c151d7229739291f"
Jan 20 12:20:37 crc kubenswrapper[4725]: E0120 12:20:37.372519 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4713ab8c83320dfd043da22c541ce332cbede107df0f43c2c151d7229739291f\": container with ID starting with 4713ab8c83320dfd043da22c541ce332cbede107df0f43c2c151d7229739291f not found: ID does not exist" containerID="4713ab8c83320dfd043da22c541ce332cbede107df0f43c2c151d7229739291f"
Jan 20 12:20:37 crc kubenswrapper[4725]: I0120 12:20:37.372557 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4713ab8c83320dfd043da22c541ce332cbede107df0f43c2c151d7229739291f"} err="failed to get container status \"4713ab8c83320dfd043da22c541ce332cbede107df0f43c2c151d7229739291f\": rpc error: code = NotFound desc = could not find container \"4713ab8c83320dfd043da22c541ce332cbede107df0f43c2c151d7229739291f\": container with ID starting with 4713ab8c83320dfd043da22c541ce332cbede107df0f43c2c151d7229739291f not found: ID does not exist"
Jan 20 12:20:37 crc kubenswrapper[4725]: I0120 12:20:37.421101 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zckz\" (UniqueName: \"kubernetes.io/projected/7640ce90-ea6e-4f5c-af78-5502daee755f-kube-api-access-9zckz\") on node \"crc\" DevicePath \"\""
Jan 20 12:20:37 crc kubenswrapper[4725]: I0120 12:20:37.656146 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-lp67f"]
Jan 20 12:20:37 crc kubenswrapper[4725]: I0120 12:20:37.663102 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-lp67f"]
Jan 20 12:20:38 crc kubenswrapper[4725]: I0120 12:20:38.943398 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7640ce90-ea6e-4f5c-af78-5502daee755f" path="/var/lib/kubelet/pods/7640ce90-ea6e-4f5c-af78-5502daee755f/volumes"
Jan 20 12:20:41 crc kubenswrapper[4725]: I0120 12:20:41.933321 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d"
Jan 20 12:20:41 crc kubenswrapper[4725]: E0120 12:20:41.933801 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:20:54 crc kubenswrapper[4725]: I0120 12:20:54.933022 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d"
Jan 20 12:20:54 crc kubenswrapper[4725]: E0120 12:20:54.934040 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:21:01 crc kubenswrapper[4725]: I0120 12:21:01.836263 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-658h8"]
Jan 20 12:21:01 crc kubenswrapper[4725]: E0120 12:21:01.837428 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7640ce90-ea6e-4f5c-af78-5502daee755f" containerName="registry-server"
Jan 20 12:21:01 crc kubenswrapper[4725]: I0120 12:21:01.837448 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="7640ce90-ea6e-4f5c-af78-5502daee755f" containerName="registry-server"
Jan 20 12:21:01 crc kubenswrapper[4725]: I0120 12:21:01.837698 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="7640ce90-ea6e-4f5c-af78-5502daee755f" containerName="registry-server"
Jan 20 12:21:01 crc kubenswrapper[4725]: I0120 12:21:01.839033 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-658h8"
Jan 20 12:21:01 crc kubenswrapper[4725]: I0120 12:21:01.867662 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-658h8"]
Jan 20 12:21:01 crc kubenswrapper[4725]: I0120 12:21:01.997756 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-utilities\") pod \"redhat-operators-658h8\" (UID: \"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6\") " pod="openshift-marketplace/redhat-operators-658h8"
Jan 20 12:21:01 crc kubenswrapper[4725]: I0120 12:21:01.997986 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6bkp\" (UniqueName: \"kubernetes.io/projected/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-kube-api-access-w6bkp\") pod \"redhat-operators-658h8\" (UID: \"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6\") " pod="openshift-marketplace/redhat-operators-658h8"
Jan 20 12:21:01 crc kubenswrapper[4725]: I0120 12:21:01.998215 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-catalog-content\") pod \"redhat-operators-658h8\" (UID: \"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6\") " pod="openshift-marketplace/redhat-operators-658h8"
Jan 20 12:21:02 crc kubenswrapper[4725]: I0120 12:21:02.099673 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-catalog-content\") pod \"redhat-operators-658h8\" (UID: \"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6\") " pod="openshift-marketplace/redhat-operators-658h8"
Jan 20 12:21:02 crc kubenswrapper[4725]: I0120 12:21:02.099787 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-utilities\") pod \"redhat-operators-658h8\" (UID: \"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6\") " pod="openshift-marketplace/redhat-operators-658h8"
Jan 20 12:21:02 crc kubenswrapper[4725]: I0120 12:21:02.099842 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6bkp\" (UniqueName: \"kubernetes.io/projected/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-kube-api-access-w6bkp\") pod \"redhat-operators-658h8\" (UID: \"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6\") " pod="openshift-marketplace/redhat-operators-658h8"
Jan 20 12:21:02 crc kubenswrapper[4725]: I0120 12:21:02.100591 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-utilities\") pod \"redhat-operators-658h8\" (UID: \"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6\") " pod="openshift-marketplace/redhat-operators-658h8"
Jan 20 12:21:02 crc kubenswrapper[4725]: I0120 12:21:02.100591 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-catalog-content\") pod \"redhat-operators-658h8\" (UID: \"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6\") " pod="openshift-marketplace/redhat-operators-658h8"
Jan 20 12:21:02 crc kubenswrapper[4725]: I0120 12:21:02.137459 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6bkp\" (UniqueName: \"kubernetes.io/projected/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-kube-api-access-w6bkp\") pod \"redhat-operators-658h8\" (UID: \"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6\") " pod="openshift-marketplace/redhat-operators-658h8"
Jan 20 12:21:02 crc kubenswrapper[4725]: I0120 12:21:02.173695 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-658h8"
Jan 20 12:21:02 crc kubenswrapper[4725]: I0120 12:21:02.622931 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-658h8"]
Jan 20 12:21:02 crc kubenswrapper[4725]: I0120 12:21:02.671776 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-658h8" event={"ID":"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6","Type":"ContainerStarted","Data":"bcf2688f9c9fd6b567706ba480c4ac8ffd3d7103e7a910e08e35b300f702ec49"}
Jan 20 12:21:03 crc kubenswrapper[4725]: I0120 12:21:03.683381 4725 generic.go:334] "Generic (PLEG): container finished" podID="dc5e28f2-6c79-46db-9cb4-33a9ff1827c6" containerID="51a320466ed1a267503b777f64e72ff7724df04eae7d239547d6d3437a1333ea" exitCode=0
Jan 20 12:21:03 crc kubenswrapper[4725]: I0120 12:21:03.683470 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-658h8" event={"ID":"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6","Type":"ContainerDied","Data":"51a320466ed1a267503b777f64e72ff7724df04eae7d239547d6d3437a1333ea"}
Jan 20 12:21:04 crc kubenswrapper[4725]: I0120 12:21:04.692348 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-658h8" event={"ID":"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6","Type":"ContainerStarted","Data":"009c81d40f1ac9267990ff18f4fb7472f1e59786baf9033867f08705f20aab95"}
Jan 20 12:21:07 crc kubenswrapper[4725]: I0120 12:21:07.720386 4725 generic.go:334] "Generic (PLEG): container finished" podID="dc5e28f2-6c79-46db-9cb4-33a9ff1827c6" containerID="009c81d40f1ac9267990ff18f4fb7472f1e59786baf9033867f08705f20aab95" exitCode=0
Jan 20 12:21:07 crc kubenswrapper[4725]: I0120 12:21:07.720629 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-658h8" event={"ID":"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6","Type":"ContainerDied","Data":"009c81d40f1ac9267990ff18f4fb7472f1e59786baf9033867f08705f20aab95"}
Jan 20 12:21:07 crc kubenswrapper[4725]: I0120 12:21:07.933394 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d"
Jan 20 12:21:07 crc kubenswrapper[4725]: E0120 12:21:07.933716 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:21:09 crc kubenswrapper[4725]: I0120 12:21:09.746119 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-658h8" event={"ID":"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6","Type":"ContainerStarted","Data":"19213318ef2e19abe5bed2a0f853314af7871cbe2cfdd9e41b3e5a921bb77d2a"}
Jan 20 12:21:09 crc kubenswrapper[4725]: I0120 12:21:09.791364 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-658h8" podStartSLOduration=3.474937984 podStartE2EDuration="8.791280282s" podCreationTimestamp="2026-01-20 12:21:01 +0000 UTC" firstStartedPulling="2026-01-20 12:21:03.685424384 +0000 UTC m=+4591.893746367" lastFinishedPulling="2026-01-20 12:21:09.001766692 +0000 UTC m=+4597.210088665" observedRunningTime="2026-01-20 12:21:09.782401072 +0000 UTC m=+4597.990723045" watchObservedRunningTime="2026-01-20 12:21:09.791280282 +0000 UTC m=+4597.999602255"
Jan 20 12:21:12 crc kubenswrapper[4725]: I0120 12:21:12.176024 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-658h8"
Jan 20 12:21:12 crc kubenswrapper[4725]: I0120 12:21:12.176631 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-658h8"
Jan 20 12:21:13 crc kubenswrapper[4725]: I0120 12:21:13.248609 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-658h8" podUID="dc5e28f2-6c79-46db-9cb4-33a9ff1827c6" containerName="registry-server" probeResult="failure" output=<
Jan 20 12:21:13 crc kubenswrapper[4725]: timeout: failed to connect service ":50051" within 1s
Jan 20 12:21:13 crc kubenswrapper[4725]: >
Jan 20 12:21:21 crc kubenswrapper[4725]: I0120 12:21:21.933260 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d"
Jan 20 12:21:21 crc kubenswrapper[4725]: E0120 12:21:21.934748 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:21:22 crc kubenswrapper[4725]: I0120 12:21:22.239544 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-658h8"
Jan 20 12:21:22 crc kubenswrapper[4725]: I0120 12:21:22.286633 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-658h8"
Jan 20 12:21:22 crc kubenswrapper[4725]: I0120 12:21:22.482187 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-658h8"]
Jan 20 12:21:23 crc kubenswrapper[4725]: I0120 12:21:23.984914 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-658h8" podUID="dc5e28f2-6c79-46db-9cb4-33a9ff1827c6" containerName="registry-server" containerID="cri-o://19213318ef2e19abe5bed2a0f853314af7871cbe2cfdd9e41b3e5a921bb77d2a" gracePeriod=2
Jan 20 12:21:24 crc kubenswrapper[4725]: I0120 12:21:24.510646 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-658h8"
Jan 20 12:21:24 crc kubenswrapper[4725]: I0120 12:21:24.640005 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-catalog-content\") pod \"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6\" (UID: \"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6\") "
Jan 20 12:21:24 crc kubenswrapper[4725]: I0120 12:21:24.640200 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-utilities\") pod \"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6\" (UID: \"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6\") "
Jan 20 12:21:24 crc kubenswrapper[4725]: I0120 12:21:24.640269 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6bkp\" (UniqueName: \"kubernetes.io/projected/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-kube-api-access-w6bkp\") pod \"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6\" (UID: \"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6\") "
Jan 20 12:21:24 crc kubenswrapper[4725]: I0120 12:21:24.641629 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-utilities" (OuterVolumeSpecName: "utilities") pod "dc5e28f2-6c79-46db-9cb4-33a9ff1827c6" (UID: "dc5e28f2-6c79-46db-9cb4-33a9ff1827c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 12:21:24 crc kubenswrapper[4725]: I0120 12:21:24.648882 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-kube-api-access-w6bkp" (OuterVolumeSpecName: "kube-api-access-w6bkp") pod "dc5e28f2-6c79-46db-9cb4-33a9ff1827c6" (UID: "dc5e28f2-6c79-46db-9cb4-33a9ff1827c6"). InnerVolumeSpecName "kube-api-access-w6bkp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 12:21:24 crc kubenswrapper[4725]: I0120 12:21:24.742145 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-utilities\") on node \"crc\" DevicePath \"\""
Jan 20 12:21:24 crc kubenswrapper[4725]: I0120 12:21:24.742188 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6bkp\" (UniqueName: \"kubernetes.io/projected/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-kube-api-access-w6bkp\") on node \"crc\" DevicePath \"\""
Jan 20 12:21:24 crc kubenswrapper[4725]: I0120 12:21:24.778584 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc5e28f2-6c79-46db-9cb4-33a9ff1827c6" (UID: "dc5e28f2-6c79-46db-9cb4-33a9ff1827c6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 12:21:24 crc kubenswrapper[4725]: I0120 12:21:24.843467 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 12:21:24 crc kubenswrapper[4725]: I0120 12:21:24.997506 4725 generic.go:334] "Generic (PLEG): container finished" podID="dc5e28f2-6c79-46db-9cb4-33a9ff1827c6" containerID="19213318ef2e19abe5bed2a0f853314af7871cbe2cfdd9e41b3e5a921bb77d2a" exitCode=0
Jan 20 12:21:24 crc kubenswrapper[4725]: I0120 12:21:24.997582 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-658h8" event={"ID":"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6","Type":"ContainerDied","Data":"19213318ef2e19abe5bed2a0f853314af7871cbe2cfdd9e41b3e5a921bb77d2a"}
Jan 20 12:21:24 crc kubenswrapper[4725]: I0120 12:21:24.997628 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-658h8" event={"ID":"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6","Type":"ContainerDied","Data":"bcf2688f9c9fd6b567706ba480c4ac8ffd3d7103e7a910e08e35b300f702ec49"}
Jan 20 12:21:24 crc kubenswrapper[4725]: I0120 12:21:24.997649 4725 scope.go:117] "RemoveContainer" containerID="19213318ef2e19abe5bed2a0f853314af7871cbe2cfdd9e41b3e5a921bb77d2a"
Jan 20 12:21:24 crc kubenswrapper[4725]: I0120 12:21:24.997664 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-658h8"
Jan 20 12:21:25 crc kubenswrapper[4725]: I0120 12:21:25.037749 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-658h8"]
Jan 20 12:21:25 crc kubenswrapper[4725]: I0120 12:21:25.040676 4725 scope.go:117] "RemoveContainer" containerID="009c81d40f1ac9267990ff18f4fb7472f1e59786baf9033867f08705f20aab95"
Jan 20 12:21:25 crc kubenswrapper[4725]: I0120 12:21:25.042593 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-658h8"]
Jan 20 12:21:25 crc kubenswrapper[4725]: I0120 12:21:25.072237 4725 scope.go:117] "RemoveContainer" containerID="51a320466ed1a267503b777f64e72ff7724df04eae7d239547d6d3437a1333ea"
Jan 20 12:21:25 crc kubenswrapper[4725]: I0120 12:21:25.108922 4725 scope.go:117] "RemoveContainer" containerID="19213318ef2e19abe5bed2a0f853314af7871cbe2cfdd9e41b3e5a921bb77d2a"
Jan 20 12:21:25 crc kubenswrapper[4725]: E0120 12:21:25.109648 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19213318ef2e19abe5bed2a0f853314af7871cbe2cfdd9e41b3e5a921bb77d2a\": container with ID starting with 19213318ef2e19abe5bed2a0f853314af7871cbe2cfdd9e41b3e5a921bb77d2a not found: ID does not exist" containerID="19213318ef2e19abe5bed2a0f853314af7871cbe2cfdd9e41b3e5a921bb77d2a"
Jan 20 12:21:25 crc kubenswrapper[4725]: I0120 12:21:25.109687 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19213318ef2e19abe5bed2a0f853314af7871cbe2cfdd9e41b3e5a921bb77d2a"} err="failed to get container status \"19213318ef2e19abe5bed2a0f853314af7871cbe2cfdd9e41b3e5a921bb77d2a\": rpc error: code = NotFound desc = could not find container \"19213318ef2e19abe5bed2a0f853314af7871cbe2cfdd9e41b3e5a921bb77d2a\": container with ID starting with 19213318ef2e19abe5bed2a0f853314af7871cbe2cfdd9e41b3e5a921bb77d2a not found: ID does
not exist" Jan 20 12:21:25 crc kubenswrapper[4725]: I0120 12:21:25.109722 4725 scope.go:117] "RemoveContainer" containerID="009c81d40f1ac9267990ff18f4fb7472f1e59786baf9033867f08705f20aab95" Jan 20 12:21:25 crc kubenswrapper[4725]: E0120 12:21:25.110411 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"009c81d40f1ac9267990ff18f4fb7472f1e59786baf9033867f08705f20aab95\": container with ID starting with 009c81d40f1ac9267990ff18f4fb7472f1e59786baf9033867f08705f20aab95 not found: ID does not exist" containerID="009c81d40f1ac9267990ff18f4fb7472f1e59786baf9033867f08705f20aab95" Jan 20 12:21:25 crc kubenswrapper[4725]: I0120 12:21:25.110482 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"009c81d40f1ac9267990ff18f4fb7472f1e59786baf9033867f08705f20aab95"} err="failed to get container status \"009c81d40f1ac9267990ff18f4fb7472f1e59786baf9033867f08705f20aab95\": rpc error: code = NotFound desc = could not find container \"009c81d40f1ac9267990ff18f4fb7472f1e59786baf9033867f08705f20aab95\": container with ID starting with 009c81d40f1ac9267990ff18f4fb7472f1e59786baf9033867f08705f20aab95 not found: ID does not exist" Jan 20 12:21:25 crc kubenswrapper[4725]: I0120 12:21:25.110532 4725 scope.go:117] "RemoveContainer" containerID="51a320466ed1a267503b777f64e72ff7724df04eae7d239547d6d3437a1333ea" Jan 20 12:21:25 crc kubenswrapper[4725]: E0120 12:21:25.111036 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51a320466ed1a267503b777f64e72ff7724df04eae7d239547d6d3437a1333ea\": container with ID starting with 51a320466ed1a267503b777f64e72ff7724df04eae7d239547d6d3437a1333ea not found: ID does not exist" containerID="51a320466ed1a267503b777f64e72ff7724df04eae7d239547d6d3437a1333ea" Jan 20 12:21:25 crc kubenswrapper[4725]: I0120 12:21:25.111133 4725 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51a320466ed1a267503b777f64e72ff7724df04eae7d239547d6d3437a1333ea"} err="failed to get container status \"51a320466ed1a267503b777f64e72ff7724df04eae7d239547d6d3437a1333ea\": rpc error: code = NotFound desc = could not find container \"51a320466ed1a267503b777f64e72ff7724df04eae7d239547d6d3437a1333ea\": container with ID starting with 51a320466ed1a267503b777f64e72ff7724df04eae7d239547d6d3437a1333ea not found: ID does not exist" Jan 20 12:21:26 crc kubenswrapper[4725]: I0120 12:21:26.966262 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc5e28f2-6c79-46db-9cb4-33a9ff1827c6" path="/var/lib/kubelet/pods/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6/volumes" Jan 20 12:21:34 crc kubenswrapper[4725]: I0120 12:21:34.934038 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" Jan 20 12:21:34 crc kubenswrapper[4725]: E0120 12:21:34.936899 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:21:48 crc kubenswrapper[4725]: I0120 12:21:48.936937 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" Jan 20 12:21:48 crc kubenswrapper[4725]: E0120 12:21:48.937899 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:21:59 crc kubenswrapper[4725]: I0120 12:21:59.933294 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" Jan 20 12:21:59 crc kubenswrapper[4725]: E0120 12:21:59.934284 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:22:11 crc kubenswrapper[4725]: I0120 12:22:11.933132 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" Jan 20 12:22:11 crc kubenswrapper[4725]: E0120 12:22:11.934099 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.373034 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w8p54"] Jan 20 12:22:19 crc kubenswrapper[4725]: E0120 12:22:19.374468 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc5e28f2-6c79-46db-9cb4-33a9ff1827c6" containerName="extract-content" Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.374489 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc5e28f2-6c79-46db-9cb4-33a9ff1827c6" 
containerName="extract-content" Jan 20 12:22:19 crc kubenswrapper[4725]: E0120 12:22:19.374533 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc5e28f2-6c79-46db-9cb4-33a9ff1827c6" containerName="extract-utilities" Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.374543 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc5e28f2-6c79-46db-9cb4-33a9ff1827c6" containerName="extract-utilities" Jan 20 12:22:19 crc kubenswrapper[4725]: E0120 12:22:19.374584 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc5e28f2-6c79-46db-9cb4-33a9ff1827c6" containerName="registry-server" Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.374595 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc5e28f2-6c79-46db-9cb4-33a9ff1827c6" containerName="registry-server" Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.374903 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc5e28f2-6c79-46db-9cb4-33a9ff1827c6" containerName="registry-server" Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.376753 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.382238 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w8p54"] Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.545902 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdab8aea-b316-46bd-8ef3-419256bf52ae-utilities\") pod \"certified-operators-w8p54\" (UID: \"fdab8aea-b316-46bd-8ef3-419256bf52ae\") " pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.545987 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkzt8\" (UniqueName: \"kubernetes.io/projected/fdab8aea-b316-46bd-8ef3-419256bf52ae-kube-api-access-qkzt8\") pod \"certified-operators-w8p54\" (UID: \"fdab8aea-b316-46bd-8ef3-419256bf52ae\") " pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.546052 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdab8aea-b316-46bd-8ef3-419256bf52ae-catalog-content\") pod \"certified-operators-w8p54\" (UID: \"fdab8aea-b316-46bd-8ef3-419256bf52ae\") " pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.648716 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdab8aea-b316-46bd-8ef3-419256bf52ae-utilities\") pod \"certified-operators-w8p54\" (UID: \"fdab8aea-b316-46bd-8ef3-419256bf52ae\") " pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.648849 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qkzt8\" (UniqueName: \"kubernetes.io/projected/fdab8aea-b316-46bd-8ef3-419256bf52ae-kube-api-access-qkzt8\") pod \"certified-operators-w8p54\" (UID: \"fdab8aea-b316-46bd-8ef3-419256bf52ae\") " pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.648910 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdab8aea-b316-46bd-8ef3-419256bf52ae-catalog-content\") pod \"certified-operators-w8p54\" (UID: \"fdab8aea-b316-46bd-8ef3-419256bf52ae\") " pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.649674 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdab8aea-b316-46bd-8ef3-419256bf52ae-utilities\") pod \"certified-operators-w8p54\" (UID: \"fdab8aea-b316-46bd-8ef3-419256bf52ae\") " pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.649998 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdab8aea-b316-46bd-8ef3-419256bf52ae-catalog-content\") pod \"certified-operators-w8p54\" (UID: \"fdab8aea-b316-46bd-8ef3-419256bf52ae\") " pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.688390 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkzt8\" (UniqueName: \"kubernetes.io/projected/fdab8aea-b316-46bd-8ef3-419256bf52ae-kube-api-access-qkzt8\") pod \"certified-operators-w8p54\" (UID: \"fdab8aea-b316-46bd-8ef3-419256bf52ae\") " pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.721579 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:20 crc kubenswrapper[4725]: I0120 12:22:20.127547 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w8p54"] Jan 20 12:22:20 crc kubenswrapper[4725]: I0120 12:22:20.529703 4725 generic.go:334] "Generic (PLEG): container finished" podID="fdab8aea-b316-46bd-8ef3-419256bf52ae" containerID="3f8732f1e0d155a951b555422a3d9b59786136466baeeb54d35ed8f3fbf45255" exitCode=0 Jan 20 12:22:20 crc kubenswrapper[4725]: I0120 12:22:20.529765 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8p54" event={"ID":"fdab8aea-b316-46bd-8ef3-419256bf52ae","Type":"ContainerDied","Data":"3f8732f1e0d155a951b555422a3d9b59786136466baeeb54d35ed8f3fbf45255"} Jan 20 12:22:20 crc kubenswrapper[4725]: I0120 12:22:20.529798 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8p54" event={"ID":"fdab8aea-b316-46bd-8ef3-419256bf52ae","Type":"ContainerStarted","Data":"f1b75603941b80db17c2ee1dc8d105b359287d074919b28b90768bb82fd3ba6f"} Jan 20 12:22:22 crc kubenswrapper[4725]: I0120 12:22:22.550921 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8p54" event={"ID":"fdab8aea-b316-46bd-8ef3-419256bf52ae","Type":"ContainerStarted","Data":"13d8c192c4d2973411dbab6c89524b9429fbf7ae17fc1d9d0964e318ccae3846"} Jan 20 12:22:23 crc kubenswrapper[4725]: I0120 12:22:23.595264 4725 generic.go:334] "Generic (PLEG): container finished" podID="fdab8aea-b316-46bd-8ef3-419256bf52ae" containerID="13d8c192c4d2973411dbab6c89524b9429fbf7ae17fc1d9d0964e318ccae3846" exitCode=0 Jan 20 12:22:23 crc kubenswrapper[4725]: I0120 12:22:23.595666 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8p54" 
event={"ID":"fdab8aea-b316-46bd-8ef3-419256bf52ae","Type":"ContainerDied","Data":"13d8c192c4d2973411dbab6c89524b9429fbf7ae17fc1d9d0964e318ccae3846"} Jan 20 12:22:24 crc kubenswrapper[4725]: I0120 12:22:24.611989 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8p54" event={"ID":"fdab8aea-b316-46bd-8ef3-419256bf52ae","Type":"ContainerStarted","Data":"4e62a70e8d0d88a5a800e1827d91617d0bc420b551e4ce1637feacf8f7477b78"} Jan 20 12:22:24 crc kubenswrapper[4725]: I0120 12:22:24.727973 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w8p54" podStartSLOduration=2.112529183 podStartE2EDuration="5.727939014s" podCreationTimestamp="2026-01-20 12:22:19 +0000 UTC" firstStartedPulling="2026-01-20 12:22:20.531948155 +0000 UTC m=+4668.740270128" lastFinishedPulling="2026-01-20 12:22:24.147357966 +0000 UTC m=+4672.355679959" observedRunningTime="2026-01-20 12:22:24.720791308 +0000 UTC m=+4672.929113291" watchObservedRunningTime="2026-01-20 12:22:24.727939014 +0000 UTC m=+4672.936260997" Jan 20 12:22:26 crc kubenswrapper[4725]: I0120 12:22:26.933745 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" Jan 20 12:22:26 crc kubenswrapper[4725]: E0120 12:22:26.934542 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:22:29 crc kubenswrapper[4725]: I0120 12:22:29.722554 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:29 crc 
kubenswrapper[4725]: I0120 12:22:29.723129 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:29 crc kubenswrapper[4725]: I0120 12:22:29.782170 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:30 crc kubenswrapper[4725]: I0120 12:22:30.904176 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:31 crc kubenswrapper[4725]: I0120 12:22:31.001794 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w8p54"] Jan 20 12:22:32 crc kubenswrapper[4725]: I0120 12:22:32.696283 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w8p54" podUID="fdab8aea-b316-46bd-8ef3-419256bf52ae" containerName="registry-server" containerID="cri-o://4e62a70e8d0d88a5a800e1827d91617d0bc420b551e4ce1637feacf8f7477b78" gracePeriod=2 Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.152126 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.205651 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkzt8\" (UniqueName: \"kubernetes.io/projected/fdab8aea-b316-46bd-8ef3-419256bf52ae-kube-api-access-qkzt8\") pod \"fdab8aea-b316-46bd-8ef3-419256bf52ae\" (UID: \"fdab8aea-b316-46bd-8ef3-419256bf52ae\") " Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.206490 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdab8aea-b316-46bd-8ef3-419256bf52ae-catalog-content\") pod \"fdab8aea-b316-46bd-8ef3-419256bf52ae\" (UID: \"fdab8aea-b316-46bd-8ef3-419256bf52ae\") " Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.206544 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdab8aea-b316-46bd-8ef3-419256bf52ae-utilities\") pod \"fdab8aea-b316-46bd-8ef3-419256bf52ae\" (UID: \"fdab8aea-b316-46bd-8ef3-419256bf52ae\") " Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.208014 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdab8aea-b316-46bd-8ef3-419256bf52ae-utilities" (OuterVolumeSpecName: "utilities") pod "fdab8aea-b316-46bd-8ef3-419256bf52ae" (UID: "fdab8aea-b316-46bd-8ef3-419256bf52ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.221994 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdab8aea-b316-46bd-8ef3-419256bf52ae-kube-api-access-qkzt8" (OuterVolumeSpecName: "kube-api-access-qkzt8") pod "fdab8aea-b316-46bd-8ef3-419256bf52ae" (UID: "fdab8aea-b316-46bd-8ef3-419256bf52ae"). InnerVolumeSpecName "kube-api-access-qkzt8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.257371 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdab8aea-b316-46bd-8ef3-419256bf52ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fdab8aea-b316-46bd-8ef3-419256bf52ae" (UID: "fdab8aea-b316-46bd-8ef3-419256bf52ae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.308418 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdab8aea-b316-46bd-8ef3-419256bf52ae-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.308471 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdab8aea-b316-46bd-8ef3-419256bf52ae-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.308485 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkzt8\" (UniqueName: \"kubernetes.io/projected/fdab8aea-b316-46bd-8ef3-419256bf52ae-kube-api-access-qkzt8\") on node \"crc\" DevicePath \"\"" Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.821591 4725 generic.go:334] "Generic (PLEG): container finished" podID="fdab8aea-b316-46bd-8ef3-419256bf52ae" containerID="4e62a70e8d0d88a5a800e1827d91617d0bc420b551e4ce1637feacf8f7477b78" exitCode=0 Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.821709 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.821708 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8p54" event={"ID":"fdab8aea-b316-46bd-8ef3-419256bf52ae","Type":"ContainerDied","Data":"4e62a70e8d0d88a5a800e1827d91617d0bc420b551e4ce1637feacf8f7477b78"} Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.821934 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8p54" event={"ID":"fdab8aea-b316-46bd-8ef3-419256bf52ae","Type":"ContainerDied","Data":"f1b75603941b80db17c2ee1dc8d105b359287d074919b28b90768bb82fd3ba6f"} Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.821983 4725 scope.go:117] "RemoveContainer" containerID="4e62a70e8d0d88a5a800e1827d91617d0bc420b551e4ce1637feacf8f7477b78" Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.848844 4725 scope.go:117] "RemoveContainer" containerID="13d8c192c4d2973411dbab6c89524b9429fbf7ae17fc1d9d0964e318ccae3846" Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.893406 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w8p54"] Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.900054 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w8p54"] Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.909157 4725 scope.go:117] "RemoveContainer" containerID="3f8732f1e0d155a951b555422a3d9b59786136466baeeb54d35ed8f3fbf45255" Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.934218 4725 scope.go:117] "RemoveContainer" containerID="4e62a70e8d0d88a5a800e1827d91617d0bc420b551e4ce1637feacf8f7477b78" Jan 20 12:22:34 crc kubenswrapper[4725]: E0120 12:22:34.934875 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4e62a70e8d0d88a5a800e1827d91617d0bc420b551e4ce1637feacf8f7477b78\": container with ID starting with 4e62a70e8d0d88a5a800e1827d91617d0bc420b551e4ce1637feacf8f7477b78 not found: ID does not exist" containerID="4e62a70e8d0d88a5a800e1827d91617d0bc420b551e4ce1637feacf8f7477b78" Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.934938 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e62a70e8d0d88a5a800e1827d91617d0bc420b551e4ce1637feacf8f7477b78"} err="failed to get container status \"4e62a70e8d0d88a5a800e1827d91617d0bc420b551e4ce1637feacf8f7477b78\": rpc error: code = NotFound desc = could not find container \"4e62a70e8d0d88a5a800e1827d91617d0bc420b551e4ce1637feacf8f7477b78\": container with ID starting with 4e62a70e8d0d88a5a800e1827d91617d0bc420b551e4ce1637feacf8f7477b78 not found: ID does not exist" Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.934977 4725 scope.go:117] "RemoveContainer" containerID="13d8c192c4d2973411dbab6c89524b9429fbf7ae17fc1d9d0964e318ccae3846" Jan 20 12:22:34 crc kubenswrapper[4725]: E0120 12:22:34.935438 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13d8c192c4d2973411dbab6c89524b9429fbf7ae17fc1d9d0964e318ccae3846\": container with ID starting with 13d8c192c4d2973411dbab6c89524b9429fbf7ae17fc1d9d0964e318ccae3846 not found: ID does not exist" containerID="13d8c192c4d2973411dbab6c89524b9429fbf7ae17fc1d9d0964e318ccae3846" Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.935465 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13d8c192c4d2973411dbab6c89524b9429fbf7ae17fc1d9d0964e318ccae3846"} err="failed to get container status \"13d8c192c4d2973411dbab6c89524b9429fbf7ae17fc1d9d0964e318ccae3846\": rpc error: code = NotFound desc = could not find container \"13d8c192c4d2973411dbab6c89524b9429fbf7ae17fc1d9d0964e318ccae3846\": container with ID 
starting with 13d8c192c4d2973411dbab6c89524b9429fbf7ae17fc1d9d0964e318ccae3846 not found: ID does not exist" Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.935500 4725 scope.go:117] "RemoveContainer" containerID="3f8732f1e0d155a951b555422a3d9b59786136466baeeb54d35ed8f3fbf45255" Jan 20 12:22:34 crc kubenswrapper[4725]: E0120 12:22:34.935865 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f8732f1e0d155a951b555422a3d9b59786136466baeeb54d35ed8f3fbf45255\": container with ID starting with 3f8732f1e0d155a951b555422a3d9b59786136466baeeb54d35ed8f3fbf45255 not found: ID does not exist" containerID="3f8732f1e0d155a951b555422a3d9b59786136466baeeb54d35ed8f3fbf45255" Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.935919 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f8732f1e0d155a951b555422a3d9b59786136466baeeb54d35ed8f3fbf45255"} err="failed to get container status \"3f8732f1e0d155a951b555422a3d9b59786136466baeeb54d35ed8f3fbf45255\": rpc error: code = NotFound desc = could not find container \"3f8732f1e0d155a951b555422a3d9b59786136466baeeb54d35ed8f3fbf45255\": container with ID starting with 3f8732f1e0d155a951b555422a3d9b59786136466baeeb54d35ed8f3fbf45255 not found: ID does not exist" Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.944842 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdab8aea-b316-46bd-8ef3-419256bf52ae" path="/var/lib/kubelet/pods/fdab8aea-b316-46bd-8ef3-419256bf52ae/volumes" Jan 20 12:22:40 crc kubenswrapper[4725]: I0120 12:22:40.935584 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" Jan 20 12:22:40 crc kubenswrapper[4725]: E0120 12:22:40.936601 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:22:54 crc kubenswrapper[4725]: I0120 12:22:54.932548 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" Jan 20 12:22:54 crc kubenswrapper[4725]: E0120 12:22:54.933529 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:23:09 crc kubenswrapper[4725]: I0120 12:23:09.934547 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" Jan 20 12:23:09 crc kubenswrapper[4725]: E0120 12:23:09.936053 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:23:20 crc kubenswrapper[4725]: I0120 12:23:20.933017 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" Jan 20 12:23:20 crc kubenswrapper[4725]: E0120 12:23:20.934187 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:23:31 crc kubenswrapper[4725]: I0120 12:23:31.932130 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" Jan 20 12:23:31 crc kubenswrapper[4725]: E0120 12:23:31.933271 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:23:43 crc kubenswrapper[4725]: I0120 12:23:43.933488 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" Jan 20 12:23:43 crc kubenswrapper[4725]: E0120 12:23:43.934703 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:23:51 crc kubenswrapper[4725]: I0120 12:23:51.917625 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2g954"] Jan 20 12:23:51 crc kubenswrapper[4725]: E0120 12:23:51.918794 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdab8aea-b316-46bd-8ef3-419256bf52ae" 
containerName="extract-content" Jan 20 12:23:51 crc kubenswrapper[4725]: I0120 12:23:51.918848 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdab8aea-b316-46bd-8ef3-419256bf52ae" containerName="extract-content" Jan 20 12:23:51 crc kubenswrapper[4725]: E0120 12:23:51.918887 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdab8aea-b316-46bd-8ef3-419256bf52ae" containerName="extract-utilities" Jan 20 12:23:51 crc kubenswrapper[4725]: I0120 12:23:51.918899 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdab8aea-b316-46bd-8ef3-419256bf52ae" containerName="extract-utilities" Jan 20 12:23:51 crc kubenswrapper[4725]: E0120 12:23:51.918907 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdab8aea-b316-46bd-8ef3-419256bf52ae" containerName="registry-server" Jan 20 12:23:51 crc kubenswrapper[4725]: I0120 12:23:51.918916 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdab8aea-b316-46bd-8ef3-419256bf52ae" containerName="registry-server" Jan 20 12:23:51 crc kubenswrapper[4725]: I0120 12:23:51.919145 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdab8aea-b316-46bd-8ef3-419256bf52ae" containerName="registry-server" Jan 20 12:23:51 crc kubenswrapper[4725]: I0120 12:23:51.920635 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2g954" Jan 20 12:23:51 crc kubenswrapper[4725]: I0120 12:23:51.925446 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2g954"] Jan 20 12:23:52 crc kubenswrapper[4725]: I0120 12:23:52.114579 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrkdn\" (UniqueName: \"kubernetes.io/projected/077a41f9-bfcb-47c4-b8de-f003ae7384ca-kube-api-access-xrkdn\") pod \"community-operators-2g954\" (UID: \"077a41f9-bfcb-47c4-b8de-f003ae7384ca\") " pod="openshift-marketplace/community-operators-2g954" Jan 20 12:23:52 crc kubenswrapper[4725]: I0120 12:23:52.114696 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/077a41f9-bfcb-47c4-b8de-f003ae7384ca-utilities\") pod \"community-operators-2g954\" (UID: \"077a41f9-bfcb-47c4-b8de-f003ae7384ca\") " pod="openshift-marketplace/community-operators-2g954" Jan 20 12:23:52 crc kubenswrapper[4725]: I0120 12:23:52.114787 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/077a41f9-bfcb-47c4-b8de-f003ae7384ca-catalog-content\") pod \"community-operators-2g954\" (UID: \"077a41f9-bfcb-47c4-b8de-f003ae7384ca\") " pod="openshift-marketplace/community-operators-2g954" Jan 20 12:23:52 crc kubenswrapper[4725]: I0120 12:23:52.216932 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/077a41f9-bfcb-47c4-b8de-f003ae7384ca-catalog-content\") pod \"community-operators-2g954\" (UID: \"077a41f9-bfcb-47c4-b8de-f003ae7384ca\") " pod="openshift-marketplace/community-operators-2g954" Jan 20 12:23:52 crc kubenswrapper[4725]: I0120 12:23:52.217184 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xrkdn\" (UniqueName: \"kubernetes.io/projected/077a41f9-bfcb-47c4-b8de-f003ae7384ca-kube-api-access-xrkdn\") pod \"community-operators-2g954\" (UID: \"077a41f9-bfcb-47c4-b8de-f003ae7384ca\") " pod="openshift-marketplace/community-operators-2g954" Jan 20 12:23:52 crc kubenswrapper[4725]: I0120 12:23:52.217242 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/077a41f9-bfcb-47c4-b8de-f003ae7384ca-utilities\") pod \"community-operators-2g954\" (UID: \"077a41f9-bfcb-47c4-b8de-f003ae7384ca\") " pod="openshift-marketplace/community-operators-2g954" Jan 20 12:23:52 crc kubenswrapper[4725]: I0120 12:23:52.217559 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/077a41f9-bfcb-47c4-b8de-f003ae7384ca-catalog-content\") pod \"community-operators-2g954\" (UID: \"077a41f9-bfcb-47c4-b8de-f003ae7384ca\") " pod="openshift-marketplace/community-operators-2g954" Jan 20 12:23:52 crc kubenswrapper[4725]: I0120 12:23:52.217901 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/077a41f9-bfcb-47c4-b8de-f003ae7384ca-utilities\") pod \"community-operators-2g954\" (UID: \"077a41f9-bfcb-47c4-b8de-f003ae7384ca\") " pod="openshift-marketplace/community-operators-2g954" Jan 20 12:23:52 crc kubenswrapper[4725]: I0120 12:23:52.252478 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrkdn\" (UniqueName: \"kubernetes.io/projected/077a41f9-bfcb-47c4-b8de-f003ae7384ca-kube-api-access-xrkdn\") pod \"community-operators-2g954\" (UID: \"077a41f9-bfcb-47c4-b8de-f003ae7384ca\") " pod="openshift-marketplace/community-operators-2g954" Jan 20 12:23:52 crc kubenswrapper[4725]: I0120 12:23:52.544396 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2g954" Jan 20 12:23:52 crc kubenswrapper[4725]: I0120 12:23:52.848526 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2g954"] Jan 20 12:23:53 crc kubenswrapper[4725]: I0120 12:23:53.146880 4725 generic.go:334] "Generic (PLEG): container finished" podID="077a41f9-bfcb-47c4-b8de-f003ae7384ca" containerID="a4749ab88b1c01774924899735541301334d280e33714a0cbd4ff7290d8a667f" exitCode=0 Jan 20 12:23:53 crc kubenswrapper[4725]: I0120 12:23:53.146947 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2g954" event={"ID":"077a41f9-bfcb-47c4-b8de-f003ae7384ca","Type":"ContainerDied","Data":"a4749ab88b1c01774924899735541301334d280e33714a0cbd4ff7290d8a667f"} Jan 20 12:23:53 crc kubenswrapper[4725]: I0120 12:23:53.147020 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2g954" event={"ID":"077a41f9-bfcb-47c4-b8de-f003ae7384ca","Type":"ContainerStarted","Data":"62b46a944066ca300e9e0e9f1441b3c5d70a48ee5cec6affb2a56a533f232b74"} Jan 20 12:23:54 crc kubenswrapper[4725]: I0120 12:23:54.933207 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" Jan 20 12:23:54 crc kubenswrapper[4725]: E0120 12:23:54.933807 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:23:55 crc kubenswrapper[4725]: I0120 12:23:55.175380 4725 generic.go:334] "Generic (PLEG): container finished" podID="077a41f9-bfcb-47c4-b8de-f003ae7384ca" 
containerID="a414b4b59ea9298ce462082f2d199ba70cb11b944a0bae058bbc6369c7d3ec2b" exitCode=0 Jan 20 12:23:55 crc kubenswrapper[4725]: I0120 12:23:55.175446 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2g954" event={"ID":"077a41f9-bfcb-47c4-b8de-f003ae7384ca","Type":"ContainerDied","Data":"a414b4b59ea9298ce462082f2d199ba70cb11b944a0bae058bbc6369c7d3ec2b"} Jan 20 12:23:56 crc kubenswrapper[4725]: I0120 12:23:56.198401 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2g954" event={"ID":"077a41f9-bfcb-47c4-b8de-f003ae7384ca","Type":"ContainerStarted","Data":"3e963a192cb36864a3155cb1f13e9e728e408861ae464dffd251028226ffc8ef"} Jan 20 12:24:02 crc kubenswrapper[4725]: I0120 12:24:02.544821 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2g954" Jan 20 12:24:02 crc kubenswrapper[4725]: I0120 12:24:02.545963 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2g954" Jan 20 12:24:02 crc kubenswrapper[4725]: I0120 12:24:02.606876 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2g954" Jan 20 12:24:02 crc kubenswrapper[4725]: I0120 12:24:02.641456 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2g954" podStartSLOduration=8.899162263000001 podStartE2EDuration="11.641420495s" podCreationTimestamp="2026-01-20 12:23:51 +0000 UTC" firstStartedPulling="2026-01-20 12:23:53.149487472 +0000 UTC m=+4761.357809445" lastFinishedPulling="2026-01-20 12:23:55.891745704 +0000 UTC m=+4764.100067677" observedRunningTime="2026-01-20 12:23:56.228312873 +0000 UTC m=+4764.436634846" watchObservedRunningTime="2026-01-20 12:24:02.641420495 +0000 UTC m=+4770.849742468" Jan 20 12:24:03 crc kubenswrapper[4725]: I0120 
12:24:03.390665 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2g954" Jan 20 12:24:03 crc kubenswrapper[4725]: I0120 12:24:03.447351 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2g954"] Jan 20 12:24:05 crc kubenswrapper[4725]: I0120 12:24:05.305577 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2g954" podUID="077a41f9-bfcb-47c4-b8de-f003ae7384ca" containerName="registry-server" containerID="cri-o://3e963a192cb36864a3155cb1f13e9e728e408861ae464dffd251028226ffc8ef" gracePeriod=2 Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.386888 4725 generic.go:334] "Generic (PLEG): container finished" podID="077a41f9-bfcb-47c4-b8de-f003ae7384ca" containerID="3e963a192cb36864a3155cb1f13e9e728e408861ae464dffd251028226ffc8ef" exitCode=0 Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.387367 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2g954" Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.387483 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2g954" event={"ID":"077a41f9-bfcb-47c4-b8de-f003ae7384ca","Type":"ContainerDied","Data":"3e963a192cb36864a3155cb1f13e9e728e408861ae464dffd251028226ffc8ef"} Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.387543 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2g954" event={"ID":"077a41f9-bfcb-47c4-b8de-f003ae7384ca","Type":"ContainerDied","Data":"62b46a944066ca300e9e0e9f1441b3c5d70a48ee5cec6affb2a56a533f232b74"} Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.387575 4725 scope.go:117] "RemoveContainer" containerID="3e963a192cb36864a3155cb1f13e9e728e408861ae464dffd251028226ffc8ef" Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.394404 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrkdn\" (UniqueName: \"kubernetes.io/projected/077a41f9-bfcb-47c4-b8de-f003ae7384ca-kube-api-access-xrkdn\") pod \"077a41f9-bfcb-47c4-b8de-f003ae7384ca\" (UID: \"077a41f9-bfcb-47c4-b8de-f003ae7384ca\") " Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.394462 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/077a41f9-bfcb-47c4-b8de-f003ae7384ca-utilities\") pod \"077a41f9-bfcb-47c4-b8de-f003ae7384ca\" (UID: \"077a41f9-bfcb-47c4-b8de-f003ae7384ca\") " Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.394571 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/077a41f9-bfcb-47c4-b8de-f003ae7384ca-catalog-content\") pod \"077a41f9-bfcb-47c4-b8de-f003ae7384ca\" (UID: \"077a41f9-bfcb-47c4-b8de-f003ae7384ca\") " Jan 20 12:24:06 crc 
kubenswrapper[4725]: I0120 12:24:06.395927 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/077a41f9-bfcb-47c4-b8de-f003ae7384ca-utilities" (OuterVolumeSpecName: "utilities") pod "077a41f9-bfcb-47c4-b8de-f003ae7384ca" (UID: "077a41f9-bfcb-47c4-b8de-f003ae7384ca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.404583 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/077a41f9-bfcb-47c4-b8de-f003ae7384ca-kube-api-access-xrkdn" (OuterVolumeSpecName: "kube-api-access-xrkdn") pod "077a41f9-bfcb-47c4-b8de-f003ae7384ca" (UID: "077a41f9-bfcb-47c4-b8de-f003ae7384ca"). InnerVolumeSpecName "kube-api-access-xrkdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.421694 4725 scope.go:117] "RemoveContainer" containerID="a414b4b59ea9298ce462082f2d199ba70cb11b944a0bae058bbc6369c7d3ec2b" Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.450936 4725 scope.go:117] "RemoveContainer" containerID="a4749ab88b1c01774924899735541301334d280e33714a0cbd4ff7290d8a667f" Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.472623 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/077a41f9-bfcb-47c4-b8de-f003ae7384ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "077a41f9-bfcb-47c4-b8de-f003ae7384ca" (UID: "077a41f9-bfcb-47c4-b8de-f003ae7384ca"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.480572 4725 scope.go:117] "RemoveContainer" containerID="3e963a192cb36864a3155cb1f13e9e728e408861ae464dffd251028226ffc8ef" Jan 20 12:24:06 crc kubenswrapper[4725]: E0120 12:24:06.481359 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e963a192cb36864a3155cb1f13e9e728e408861ae464dffd251028226ffc8ef\": container with ID starting with 3e963a192cb36864a3155cb1f13e9e728e408861ae464dffd251028226ffc8ef not found: ID does not exist" containerID="3e963a192cb36864a3155cb1f13e9e728e408861ae464dffd251028226ffc8ef" Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.481418 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e963a192cb36864a3155cb1f13e9e728e408861ae464dffd251028226ffc8ef"} err="failed to get container status \"3e963a192cb36864a3155cb1f13e9e728e408861ae464dffd251028226ffc8ef\": rpc error: code = NotFound desc = could not find container \"3e963a192cb36864a3155cb1f13e9e728e408861ae464dffd251028226ffc8ef\": container with ID starting with 3e963a192cb36864a3155cb1f13e9e728e408861ae464dffd251028226ffc8ef not found: ID does not exist" Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.481461 4725 scope.go:117] "RemoveContainer" containerID="a414b4b59ea9298ce462082f2d199ba70cb11b944a0bae058bbc6369c7d3ec2b" Jan 20 12:24:06 crc kubenswrapper[4725]: E0120 12:24:06.482166 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a414b4b59ea9298ce462082f2d199ba70cb11b944a0bae058bbc6369c7d3ec2b\": container with ID starting with a414b4b59ea9298ce462082f2d199ba70cb11b944a0bae058bbc6369c7d3ec2b not found: ID does not exist" containerID="a414b4b59ea9298ce462082f2d199ba70cb11b944a0bae058bbc6369c7d3ec2b" Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.482233 
4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a414b4b59ea9298ce462082f2d199ba70cb11b944a0bae058bbc6369c7d3ec2b"} err="failed to get container status \"a414b4b59ea9298ce462082f2d199ba70cb11b944a0bae058bbc6369c7d3ec2b\": rpc error: code = NotFound desc = could not find container \"a414b4b59ea9298ce462082f2d199ba70cb11b944a0bae058bbc6369c7d3ec2b\": container with ID starting with a414b4b59ea9298ce462082f2d199ba70cb11b944a0bae058bbc6369c7d3ec2b not found: ID does not exist" Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.482267 4725 scope.go:117] "RemoveContainer" containerID="a4749ab88b1c01774924899735541301334d280e33714a0cbd4ff7290d8a667f" Jan 20 12:24:06 crc kubenswrapper[4725]: E0120 12:24:06.482731 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4749ab88b1c01774924899735541301334d280e33714a0cbd4ff7290d8a667f\": container with ID starting with a4749ab88b1c01774924899735541301334d280e33714a0cbd4ff7290d8a667f not found: ID does not exist" containerID="a4749ab88b1c01774924899735541301334d280e33714a0cbd4ff7290d8a667f" Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.482761 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4749ab88b1c01774924899735541301334d280e33714a0cbd4ff7290d8a667f"} err="failed to get container status \"a4749ab88b1c01774924899735541301334d280e33714a0cbd4ff7290d8a667f\": rpc error: code = NotFound desc = could not find container \"a4749ab88b1c01774924899735541301334d280e33714a0cbd4ff7290d8a667f\": container with ID starting with a4749ab88b1c01774924899735541301334d280e33714a0cbd4ff7290d8a667f not found: ID does not exist" Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.496370 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/077a41f9-bfcb-47c4-b8de-f003ae7384ca-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.496402 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrkdn\" (UniqueName: \"kubernetes.io/projected/077a41f9-bfcb-47c4-b8de-f003ae7384ca-kube-api-access-xrkdn\") on node \"crc\" DevicePath \"\"" Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.496418 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/077a41f9-bfcb-47c4-b8de-f003ae7384ca-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 12:24:07 crc kubenswrapper[4725]: I0120 12:24:07.401535 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2g954" Jan 20 12:24:07 crc kubenswrapper[4725]: I0120 12:24:07.438052 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2g954"] Jan 20 12:24:07 crc kubenswrapper[4725]: I0120 12:24:07.445398 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2g954"] Jan 20 12:24:07 crc kubenswrapper[4725]: I0120 12:24:07.932825 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" Jan 20 12:24:07 crc kubenswrapper[4725]: E0120 12:24:07.934805 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:24:08 crc kubenswrapper[4725]: I0120 12:24:08.948365 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="077a41f9-bfcb-47c4-b8de-f003ae7384ca" path="/var/lib/kubelet/pods/077a41f9-bfcb-47c4-b8de-f003ae7384ca/volumes" Jan 20 12:24:18 crc kubenswrapper[4725]: I0120 12:24:18.933973 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" Jan 20 12:24:18 crc kubenswrapper[4725]: E0120 12:24:18.934981 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:24:31 crc kubenswrapper[4725]: I0120 12:24:31.933863 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" Jan 20 12:24:31 crc kubenswrapper[4725]: E0120 12:24:31.935255 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:24:42 crc kubenswrapper[4725]: I0120 12:24:42.940526 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" Jan 20 12:24:42 crc kubenswrapper[4725]: E0120 12:24:42.941808 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:24:56 crc kubenswrapper[4725]: I0120 12:24:56.935760 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" Jan 20 12:24:56 crc kubenswrapper[4725]: E0120 12:24:56.937160 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:25:10 crc kubenswrapper[4725]: I0120 12:25:10.932980 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" Jan 20 12:25:10 crc kubenswrapper[4725]: E0120 12:25:10.934349 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:25:23 crc kubenswrapper[4725]: I0120 12:25:23.933986 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" Jan 20 12:25:23 crc kubenswrapper[4725]: E0120 12:25:23.934781 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:25:34 crc kubenswrapper[4725]: I0120 12:25:34.932663 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" Jan 20 12:25:35 crc kubenswrapper[4725]: I0120 12:25:35.576506 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"adcc73ceecbc4583b032a69bd929a281091ea5ff89f855bfb4e2fea34e05779a"} Jan 20 12:25:51 crc kubenswrapper[4725]: I0120 12:25:51.239297 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-shf8t"] Jan 20 12:25:51 crc kubenswrapper[4725]: E0120 12:25:51.240606 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077a41f9-bfcb-47c4-b8de-f003ae7384ca" containerName="registry-server" Jan 20 12:25:51 crc kubenswrapper[4725]: I0120 12:25:51.240635 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="077a41f9-bfcb-47c4-b8de-f003ae7384ca" containerName="registry-server" Jan 20 12:25:51 crc kubenswrapper[4725]: E0120 12:25:51.240662 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077a41f9-bfcb-47c4-b8de-f003ae7384ca" containerName="extract-utilities" Jan 20 12:25:51 crc kubenswrapper[4725]: I0120 12:25:51.240671 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="077a41f9-bfcb-47c4-b8de-f003ae7384ca" containerName="extract-utilities" Jan 20 12:25:51 crc kubenswrapper[4725]: E0120 12:25:51.240695 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077a41f9-bfcb-47c4-b8de-f003ae7384ca" containerName="extract-content" Jan 20 12:25:51 crc kubenswrapper[4725]: I0120 
12:25:51.240704 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="077a41f9-bfcb-47c4-b8de-f003ae7384ca" containerName="extract-content" Jan 20 12:25:51 crc kubenswrapper[4725]: I0120 12:25:51.240904 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="077a41f9-bfcb-47c4-b8de-f003ae7384ca" containerName="registry-server" Jan 20 12:25:51 crc kubenswrapper[4725]: I0120 12:25:51.241697 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-shf8t" Jan 20 12:25:51 crc kubenswrapper[4725]: I0120 12:25:51.260724 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-shf8t"] Jan 20 12:25:51 crc kubenswrapper[4725]: I0120 12:25:51.348105 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtr9k\" (UniqueName: \"kubernetes.io/projected/171b1e77-c3d2-43eb-9915-3df404db0c2c-kube-api-access-vtr9k\") pod \"infrawatch-operators-shf8t\" (UID: \"171b1e77-c3d2-43eb-9915-3df404db0c2c\") " pod="service-telemetry/infrawatch-operators-shf8t" Jan 20 12:25:51 crc kubenswrapper[4725]: I0120 12:25:51.449817 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtr9k\" (UniqueName: \"kubernetes.io/projected/171b1e77-c3d2-43eb-9915-3df404db0c2c-kube-api-access-vtr9k\") pod \"infrawatch-operators-shf8t\" (UID: \"171b1e77-c3d2-43eb-9915-3df404db0c2c\") " pod="service-telemetry/infrawatch-operators-shf8t" Jan 20 12:25:51 crc kubenswrapper[4725]: I0120 12:25:51.512059 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtr9k\" (UniqueName: \"kubernetes.io/projected/171b1e77-c3d2-43eb-9915-3df404db0c2c-kube-api-access-vtr9k\") pod \"infrawatch-operators-shf8t\" (UID: \"171b1e77-c3d2-43eb-9915-3df404db0c2c\") " pod="service-telemetry/infrawatch-operators-shf8t" Jan 20 12:25:51 crc kubenswrapper[4725]: I0120 
12:25:51.566581 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-shf8t" Jan 20 12:25:51 crc kubenswrapper[4725]: I0120 12:25:51.987006 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-shf8t"] Jan 20 12:25:51 crc kubenswrapper[4725]: W0120 12:25:51.995129 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod171b1e77_c3d2_43eb_9915_3df404db0c2c.slice/crio-e62f7ec9321b7524516519a0b8d0c5e20e37217675a19308173d34e8810cb6e9 WatchSource:0}: Error finding container e62f7ec9321b7524516519a0b8d0c5e20e37217675a19308173d34e8810cb6e9: Status 404 returned error can't find the container with id e62f7ec9321b7524516519a0b8d0c5e20e37217675a19308173d34e8810cb6e9 Jan 20 12:25:51 crc kubenswrapper[4725]: I0120 12:25:51.998926 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 12:25:52 crc kubenswrapper[4725]: I0120 12:25:52.771151 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-shf8t" event={"ID":"171b1e77-c3d2-43eb-9915-3df404db0c2c","Type":"ContainerStarted","Data":"f04b395eadadf16eefbc40add407097d20367049759dfb4f665866647c2effd3"} Jan 20 12:25:52 crc kubenswrapper[4725]: I0120 12:25:52.771233 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-shf8t" event={"ID":"171b1e77-c3d2-43eb-9915-3df404db0c2c","Type":"ContainerStarted","Data":"e62f7ec9321b7524516519a0b8d0c5e20e37217675a19308173d34e8810cb6e9"} Jan 20 12:25:52 crc kubenswrapper[4725]: I0120 12:25:52.816234 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-shf8t" podStartSLOduration=1.692880178 podStartE2EDuration="1.816206659s" podCreationTimestamp="2026-01-20 12:25:51 +0000 UTC" firstStartedPulling="2026-01-20 
12:25:51.99851853 +0000 UTC m=+4880.206840503" lastFinishedPulling="2026-01-20 12:25:52.121845011 +0000 UTC m=+4880.330166984" observedRunningTime="2026-01-20 12:25:52.7864352 +0000 UTC m=+4880.994757233" watchObservedRunningTime="2026-01-20 12:25:52.816206659 +0000 UTC m=+4881.024528642" Jan 20 12:26:01 crc kubenswrapper[4725]: I0120 12:26:01.582355 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-shf8t" Jan 20 12:26:01 crc kubenswrapper[4725]: I0120 12:26:01.583262 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-shf8t" Jan 20 12:26:01 crc kubenswrapper[4725]: I0120 12:26:01.642336 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-shf8t" Jan 20 12:26:01 crc kubenswrapper[4725]: I0120 12:26:01.891469 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-shf8t" Jan 20 12:26:02 crc kubenswrapper[4725]: I0120 12:26:02.007443 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-shf8t"] Jan 20 12:26:03 crc kubenswrapper[4725]: I0120 12:26:03.880828 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-shf8t" podUID="171b1e77-c3d2-43eb-9915-3df404db0c2c" containerName="registry-server" containerID="cri-o://f04b395eadadf16eefbc40add407097d20367049759dfb4f665866647c2effd3" gracePeriod=2 Jan 20 12:26:04 crc kubenswrapper[4725]: I0120 12:26:04.289336 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-shf8t" Jan 20 12:26:04 crc kubenswrapper[4725]: I0120 12:26:04.451216 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtr9k\" (UniqueName: \"kubernetes.io/projected/171b1e77-c3d2-43eb-9915-3df404db0c2c-kube-api-access-vtr9k\") pod \"171b1e77-c3d2-43eb-9915-3df404db0c2c\" (UID: \"171b1e77-c3d2-43eb-9915-3df404db0c2c\") " Jan 20 12:26:04 crc kubenswrapper[4725]: I0120 12:26:04.460512 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/171b1e77-c3d2-43eb-9915-3df404db0c2c-kube-api-access-vtr9k" (OuterVolumeSpecName: "kube-api-access-vtr9k") pod "171b1e77-c3d2-43eb-9915-3df404db0c2c" (UID: "171b1e77-c3d2-43eb-9915-3df404db0c2c"). InnerVolumeSpecName "kube-api-access-vtr9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 12:26:04 crc kubenswrapper[4725]: I0120 12:26:04.553236 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtr9k\" (UniqueName: \"kubernetes.io/projected/171b1e77-c3d2-43eb-9915-3df404db0c2c-kube-api-access-vtr9k\") on node \"crc\" DevicePath \"\"" Jan 20 12:26:04 crc kubenswrapper[4725]: I0120 12:26:04.896937 4725 generic.go:334] "Generic (PLEG): container finished" podID="171b1e77-c3d2-43eb-9915-3df404db0c2c" containerID="f04b395eadadf16eefbc40add407097d20367049759dfb4f665866647c2effd3" exitCode=0 Jan 20 12:26:04 crc kubenswrapper[4725]: I0120 12:26:04.897132 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-shf8t" Jan 20 12:26:04 crc kubenswrapper[4725]: I0120 12:26:04.897141 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-shf8t" event={"ID":"171b1e77-c3d2-43eb-9915-3df404db0c2c","Type":"ContainerDied","Data":"f04b395eadadf16eefbc40add407097d20367049759dfb4f665866647c2effd3"} Jan 20 12:26:04 crc kubenswrapper[4725]: I0120 12:26:04.898415 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-shf8t" event={"ID":"171b1e77-c3d2-43eb-9915-3df404db0c2c","Type":"ContainerDied","Data":"e62f7ec9321b7524516519a0b8d0c5e20e37217675a19308173d34e8810cb6e9"} Jan 20 12:26:04 crc kubenswrapper[4725]: I0120 12:26:04.898460 4725 scope.go:117] "RemoveContainer" containerID="f04b395eadadf16eefbc40add407097d20367049759dfb4f665866647c2effd3" Jan 20 12:26:04 crc kubenswrapper[4725]: I0120 12:26:04.926694 4725 scope.go:117] "RemoveContainer" containerID="f04b395eadadf16eefbc40add407097d20367049759dfb4f665866647c2effd3" Jan 20 12:26:04 crc kubenswrapper[4725]: E0120 12:26:04.927385 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f04b395eadadf16eefbc40add407097d20367049759dfb4f665866647c2effd3\": container with ID starting with f04b395eadadf16eefbc40add407097d20367049759dfb4f665866647c2effd3 not found: ID does not exist" containerID="f04b395eadadf16eefbc40add407097d20367049759dfb4f665866647c2effd3" Jan 20 12:26:04 crc kubenswrapper[4725]: I0120 12:26:04.927444 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f04b395eadadf16eefbc40add407097d20367049759dfb4f665866647c2effd3"} err="failed to get container status \"f04b395eadadf16eefbc40add407097d20367049759dfb4f665866647c2effd3\": rpc error: code = NotFound desc = could not find container 
\"f04b395eadadf16eefbc40add407097d20367049759dfb4f665866647c2effd3\": container with ID starting with f04b395eadadf16eefbc40add407097d20367049759dfb4f665866647c2effd3 not found: ID does not exist" Jan 20 12:26:04 crc kubenswrapper[4725]: I0120 12:26:04.956805 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-shf8t"] Jan 20 12:26:04 crc kubenswrapper[4725]: I0120 12:26:04.965068 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-shf8t"] Jan 20 12:26:06 crc kubenswrapper[4725]: I0120 12:26:06.947491 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="171b1e77-c3d2-43eb-9915-3df404db0c2c" path="/var/lib/kubelet/pods/171b1e77-c3d2-43eb-9915-3df404db0c2c/volumes"